[infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

William Burns mudokonman at gmail.com
Thu Mar 1 11:26:43 EST 2018


On Thu, Mar 1, 2018 at 11:21 AM William Burns <mudokonman at gmail.com> wrote:

> On Thu, Mar 1, 2018 at 11:14 AM Thomas SEGISMONT <tsegismont at gmail.com>
> wrote:
>
>> 2018-03-01 16:36 GMT+01:00 Tristan Tarrant <ttarrant at redhat.com>:
>>
>>> You need to use the brand new CacheAdmin API:
>>>
>>>
>>> http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches
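
For reference, with 9.2 that boils down to something like the following (rough, untested sketch; the cache name and configuration are just examples, and an existing EmbeddedCacheManager reference is assumed):

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

Configuration cfg = new ConfigurationBuilder()
      .clustering().cacheMode(CacheMode.DIST_SYNC)
      .build();
// Defines and starts the cache on every node of the cluster,
// or simply returns it if it already exists.
cacheManager.administration().getOrCreateCache("myCache", cfg);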
>>
>>
>> I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2.
>>
>> Is there any way to achieve these goals with 9.1.x?
>>
>
> You could try using the ClusterExecutor to invoke getCache across all
> nodes. Note that the function has to return null, since a Cache is not
> Serializable and cannot be sent back to the caller.
>
>
Fixed typo below, sorry


> String cacheName = "myCache"; // placeholder: name of the cache to start on every node
> cache.getCacheManager().executor().submitConsumer(cm -> {
>          // Calling getCache() starts the cache on the remote node; return null
>          // because a Cache is not Serializable and cannot be sent back.
>          cm.getCache(cacheName);
>          return null;
>       }, (a, v, t) -> {
>          if (t != null) {
>             System.out.println("There was an exception " + t
>                   + " retrieving " + cacheName + " from node: " + a);
>          }
>       });
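Note that submitConsumer returns a CompletableFuture<Void>, so if you need to
block until every node has run the consumer you can join() on it before moving on.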
>
>
>>
>>
>>
>>>
>>>
>>> Tristan
>>>
>>> On 3/1/18 4:30 PM, Thomas SEGISMONT wrote:
>>> > Hi,
>>> >
>>> > This email follows up on my testing of the Infinispan Cluster Manager
>>> > for Vert.x on Kubernetes.
>>> >
>>> > In one of the tests, we want to make sure that, after a rolling update
>>> > of the application, the data submitted to Vert.x's AsyncMap is still
>>> > present. I found that when the underlying cache is predefined in
>>> > infinispan.xml the data is present; otherwise it is not.
>>> >
>>> > I pushed a simple reproducer on GitHub:
>>> > https://github.com/tsegismont/cachedataloss
>>> >
>>> > The code does this:
>>> > - a first node is started, and creates data
>>> > - new nodes are started, but they don't invoke cacheManager.getCache
>>> > - the initial member is killed
>>> > - a "testing" member is started, printing out the data in the console
>>> >
>>> > Here are my findings.
>>> >
>>> > 1/ Even when caches are declared in infinispan.xml, the data is lost
>>> > after the initial member goes away.
>>> >
>>> > A little digging showed that the caches are really distributed only
>>> > after you invoke cacheManager.getCache.
>>> >
>>> > 2/ Checking the cluster status triggers distribution
>>> >
>>> > I was wondering why the behavior was not the same as in my Vert.x
>>> > testing on OpenShift, and then I realized that the only difference was
>>> > the cluster readiness check, which reads the cluster health. So I updated
>>> > the reproducer code to add such a check (still without invoking
>>> > cacheManager.getCache). With that check in place, the caches defined in
>>> > infinispan.xml have their data distributed.
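
(For reference, such a readiness check boils down to reading the cluster health, roughly like this; a sketch only, with the cacheManager reference assumed:)

import org.infinispan.health.HealthStatus;

// Reading the cluster health is the call that, per the observation above,
// ends up triggering distribution of the caches defined in infinispan.xml.
HealthStatus status = cacheManager.getHealth().getClusterHealth().getHealthStatus();
boolean ready = status == HealthStatus.HEALTHY;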
>>> >
>>> > So,
>>> >
>>> > 1/ How can I make sure caches are distributed on all nodes, even if some
>>> > nodes never try to get a reference with cacheManager.getCache, or don't
>>> > check cluster health?
>>> > 2/ Are we doing something wrong with the way we declare the default
>>> > configuration for caches [1][2]?
>>> >
>>> > Thanks,
>>> > Thomas
>>> >
>>> > [1] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10
>>> > [2] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22
>>> >
>>> >
>>>
>>> --
>>> Tristan Tarrant
>>> Infinispan Lead and Data Grid Architect
>>> JBoss, a division of Red Hat
>>>
>
>