From ttarrant at redhat.com Thu Mar 1 03:40:00 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 1 Mar 2018 09:40:00 +0100 Subject: [infinispan-dev] 9.3 branch in a week Message-ID: <48668822-18fb-569e-7e3e-49384325ea76@redhat.com> Hi all, we will branch for 9.3 on March 7th. Tristan -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From ttarrant at redhat.com Thu Mar 1 04:12:33 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 1 Mar 2018 10:12:33 +0100 Subject: [infinispan-dev] ci.infinispan.org Message-ID: Hi all, just a few notes on ci.infinispan.org: - Added a permanent redirect rule from http to https - Refreshed JDKs (9.0.4, 1.8.0_161, 1.8.0_sr5fp10) - Updated Maven to 3.5.2 and Ant to 1.10.2 - Installed git 2.9.3 from the Software Collections to resolve the issue of shallow clones not working correctly Additionally, the envinject plugin for Jenkins is preventing the inherited environment variables from leaking into the agent build. While this creates more reliable builds, it also caused failures in the WildFly integration tests because they could not resolve env.JAVA_HOME. I have therefore added a line in Jenkinsfile for master that selects the JDK tool() to use for the build. Unfortunately there is no way for declarative pipelines to parameterize this for other JDKs, so we will probably have to adopt a different strategy in order to build with different JDKs. Tristan -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From dan.berindei at gmail.com Thu Mar 1 04:40:31 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 1 Mar 2018 09:40:31 +0000 Subject: [infinispan-dev] ci.infinispan.org In-Reply-To: References: Message-ID: Thanks Tristan! Looking at the git fetch command line I'm not sure it should work at all: --depth 10 is at the end, but man git fetch says it should be before the refspec. And yet I never suspected it was the cause :) I would have liked to also fetch a single branch, that would have made the fetch even faster, but I haven't figured out how to do that from the job configuration UI yet. Dan On Thu, Mar 1, 2018 at 9:12 AM, Tristan Tarrant wrote: > Hi all, > > just a few notes on ci.infinispan.org: > > - Added a permanent redirect rule from http to https > - Refreshed JDKs (9.0.4, 1.8.0_161, 1.8.0_sr5fp10) > - Updated Maven to 3.5.2 and Ant to 1.10.2 > - Installed git 2.9.3 from the Software Collections to resolve the issue > of shallow clones not working correctly > > Additionally, the envinject plugin for Jenkins is preventing the > inherited environment variables from leaking into the agent build. While > this creates more reliable builds, it also caused failures in the > WildFly integration tests because they could not resolve env.JAVA_HOME. > I have therefore added a line in Jenkinsfile for master that selects the > JDK tool() to use for the build. > Unfortunately there is no way for declarative pipelines to parameterize > this for other JDKs, so we will probably have to adopt a different > strategy in order to build with different JDKs. > > Tristan > -- > Tristan Tarrant > Infinispan Lead and Data Grid Architect > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180301/c9647349/attachment.html From tsegismont at gmail.com Thu Mar 1 10:30:22 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Thu, 1 Mar 2018 16:30:22 +0100 Subject: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes? Message-ID: Hi, This email follows up on my testing of the Infinispan Cluster Manager for Vert.x on Kubernetes. In one of the tests, we want to make sure that, after a rolling update of the application, the data submitted to Vert.x' AsyncMap is still present. And I found that when the underlying cache is predefined in infinispan.xml, the data is present, otherwise it's not. I pushed a simple reproducer on GitHub: https://github.com/tsegismont/cachedataloss The code does this: - a first node is started, and creates data - new nodes are started, but they don't invoke cacheManager.getCache - the initial member is killed - a "testing" member is started, printing out the data in the console Here are my findings. 1/ Even when caches are declared in infinispan.xml, the data is lost after the initial member goes away. A little digging showed that the caches are really distributed only after you invoke cacheManager.getCache 2/ Checking cluster status "starts" triggers distribution I was wondering why the behavior was not the same as with my Vert.x testing on Openshift. And then realized the only difference was the cluster readiness check, which reads the cluster health. So I updated the reproducer code to add such a check (still without invoking cacheManager.getCache). Then the caches defined in infinispan.xml have their data distributed. So, 1/ How can I make sure caches are distributed on all nodes, even if some nodes never try to get a reference with cacheManager.getCache, or don't check cluster health? 2/ Are we doing something wrong with our way to declare the default configuration for caches [1][2]? Thanks, Thomas [1] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10 [2] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180301/3170cc64/attachment.html From ttarrant at redhat.com Thu Mar 1 10:36:53 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 1 Mar 2018 16:36:53 +0100 Subject: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes? In-Reply-To: References: Message-ID: You need to use the brand new CacheAdmin API: http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches Tristan On 3/1/18 4:30 PM, Thomas SEGISMONT wrote: > Hi, > > This email follows up on my testing of the Infinispan Cluster Manager > for Vert.x on Kubernetes. > > In one of the tests, we want to make sure that, after a rolling update > of the application, the data submitted to Vert.x' AsyncMap is still > present. And I found that when the underlying cache is predefined in > infinispan.xml, the data is present, otherwise it's not. > > I pushed a simple reproducer on GitHub: > https://github.com/tsegismont/cachedataloss > > The code does this: > - a first node is started, and creates data > - new nodes are started, but they don't invoke cacheManager.getCache > - the initial member is killed > - a "testing" member is started, printing out the data in the console > > Here are my findings. 
> > 1/ Even when caches are declared in infinispan.xml, the data is lost > after the initial member goes away. > > A little digging showed that the caches are really distributed only > after you invoke cacheManager.getCache > > 2/ Checking cluster status "starts" triggers distribution > > I was wondering why the behavior was not the same as with my Vert.x > testing on Openshift. And then realized the only difference was the > cluster readiness check, which reads the cluster health. So I updated > the reproducer code to add such a check (still without invoking > cacheManager.getCache). Then the caches defined in infinispan.xml have > their data distributed. > > So, > > 1/ How can I make sure caches are distributed on all nodes, even if some > nodes never try to get a reference with cacheManager.getCache, or don't > check cluster health? > 2/ Are we doing something wrong with our way to declare the default > configuration for caches [1][2]? > > Thanks, > Thomas > > [1] > https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10 > [2] > https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22 > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From tsegismont at gmail.com Thu Mar 1 11:14:47 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Thu, 1 Mar 2018 17:14:47 +0100 Subject: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes? In-Reply-To: References: Message-ID: 2018-03-01 16:36 GMT+01:00 Tristan Tarrant : > You need to use the brand new CacheAdmin API: > > http://infinispan.org/docs/stable/user_guide/user_guide. > html#obtaining_caches I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2. Is there any way to achieve these goals with 9.1.x? > > > Tristan > > On 3/1/18 4:30 PM, Thomas SEGISMONT wrote: > > Hi, > > > > This email follows up on my testing of the Infinispan Cluster Manager > > for Vert.x on Kubernetes. > > > > In one of the tests, we want to make sure that, after a rolling update > > of the application, the data submitted to Vert.x' AsyncMap is still > > present. And I found that when the underlying cache is predefined in > > infinispan.xml, the data is present, otherwise it's not. > > > > I pushed a simple reproducer on GitHub: > > https://github.com/tsegismont/cachedataloss > > > > The code does this: > > - a first node is started, and creates data > > - new nodes are started, but they don't invoke cacheManager.getCache > > - the initial member is killed > > - a "testing" member is started, printing out the data in the console > > > > Here are my findings. > > > > 1/ Even when caches are declared in infinispan.xml, the data is lost > > after the initial member goes away. > > > > A little digging showed that the caches are really distributed only > > after you invoke cacheManager.getCache > > > > 2/ Checking cluster status "starts" triggers distribution > > > > I was wondering why the behavior was not the same as with my Vert.x > > testing on Openshift. And then realized the only difference was the > > cluster readiness check, which reads the cluster health. So I updated > > the reproducer code to add such a check (still without invoking > > cacheManager.getCache). Then the caches defined in infinispan.xml have > > their data distributed. 
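(For reference, cluster-wide cache creation through the administration API that Tristan points to above looks roughly like the sketch below on Infinispan 9.2; the class, cache name and configuration here are illustrative only and are not taken from this thread.)

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class GetOrCreateCacheSketch {

   public static void main(String[] args) throws Exception {
      // Boot a cache manager from the declarative configuration shipped with the application.
      EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");

      // Illustrative distributed configuration; an XML-defined template could be used instead.
      Configuration distributed = new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.DIST_SYNC)
            .build();

      // Creates the cache across the whole cluster if it does not exist yet,
      // otherwise returns the instance that is already running.
      cacheManager.administration().getOrCreateCache("vertx-async-map", distributed);

      cacheManager.stop();
   }
}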
> > > > So, > > 1/ How can I make sure caches are distributed on all nodes, even if some > > nodes never try to get a reference with cacheManager.getCache, or don't > > check cluster health? > > 2/ Are we doing something wrong with our way to declare the default > > configuration for caches [1][2]? > > > > Thanks, > > Thomas > > > > [1] > > https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10 > > [2] > > https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22 > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead and Data Grid Architect > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180301/cb7b8a00/attachment.html From mudokonman at gmail.com Thu Mar 1 11:21:42 2018 From: mudokonman at gmail.com (William Burns) Date: Thu, 01 Mar 2018 16:21:42 +0000 Subject: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes? In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 11:14 AM Thomas SEGISMONT wrote: > 2018-03-01 16:36 GMT+01:00 Tristan Tarrant : > >> You need to use the brand new CacheAdmin API: >> >> >> http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches > > > I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2. > > Is there any way to achieve these goals with 9.1.x? > You could try using the ClusterExecutor to invoke getCache across all nodes. Note it has to return null since a Cache is not Serializable.

String cacheName = ;
cache.getCacheManager().executor().submitConsumer(cm -> {
   cm.getCache(cacheName);
   return null;
}, (a, v, t) -> {
   if (v != null) {
      System.out.println("There was an exception retrieving " + cacheName +
            " from node: " + a);
   }
});

> > > >> >> >> Tristan >> >> On 3/1/18 4:30 PM, Thomas SEGISMONT wrote: >> > Hi, >> > >> > This email follows up on my testing of the Infinispan Cluster Manager >> > for Vert.x on Kubernetes. >> > >> > In one of the tests, we want to make sure that, after a rolling update >> > of the application, the data submitted to Vert.x' AsyncMap is still >> > present. And I found that when the underlying cache is predefined in >> > infinispan.xml, the data is present, otherwise it's not. >> > >> > I pushed a simple reproducer on GitHub: >> > https://github.com/tsegismont/cachedataloss >> > >> > The code does this: >> > - a first node is started, and creates data >> > - new nodes are started, but they don't invoke cacheManager.getCache >> > - the initial member is killed >> > - a "testing" member is started, printing out the data in the console >> > >> > Here are my findings. >> > >> > 1/ Even when caches are declared in infinispan.xml, the data is lost >> > after the initial member goes away. >> > >> > A little digging showed that the caches are really distributed only >> > after you invoke cacheManager.getCache >> > >> > 2/ Checking cluster status "starts" triggers distribution >> > >> > I was wondering why the behavior was not the same as with my Vert.x >> > testing on Openshift. 
And then realized the only difference was the >> > cluster readiness check, which reads the cluster health. So I updated >> > the reproducer code to add such a check (still without invoking >> > cacheManager.getCache). Then the caches defined in infinispan.xml have >> > their data distributed. >> > >> > So, >> > >> > 1/ How can I make sure caches are distributed on all nodes, even if some >> > nodes never try to get a reference with cacheManager.getCache, or don't >> > check cluster health? >> > 2/ Are we doing something wrong with our way to declare the default >> > configuration for caches [1][2]? >> > >> > Thanks, >> > Thomas >> > >> > [1] >> > >> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10 >> > [2] >> > >> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22 >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> >> -- >> Tristan Tarrant >> Infinispan Lead and Data Grid Architect >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180301/b171fabf/attachment.html From ttarrant at redhat.com Thu Mar 1 11:25:51 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 1 Mar 2018 17:25:51 +0100 Subject: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes? In-Reply-To: References: Message-ID: <97c97155-46b2-ec0d-906b-0aaa10afc381@redhat.com> Why not just prestart caches ? On 3/1/18 5:14 PM, Thomas SEGISMONT wrote: > > 2018-03-01 16:36 GMT+01:00 Tristan Tarrant >: > > You need to use the brand new CacheAdmin API: > > http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches > > > > I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2. > > Is there any way to achieve these goals with 9.1.x? > > > > > Tristan > > On 3/1/18 4:30 PM, Thomas SEGISMONT wrote: > > Hi, > > > > This email follows up on my testing of the Infinispan Cluster Manager > > for Vert.x on Kubernetes. > > > > In one of the tests, we want to make sure that, after a rolling > update > > of the application, the data submitted to Vert.x' AsyncMap is still > > present. And I found that when the underlying cache is predefined in > > infinispan.xml, the data is present, otherwise it's not. > > > > I pushed a simple reproducer on GitHub: > > https://github.com/tsegismont/cachedataloss > > > > > The code does this: > > - a first node is started, and creates data > > - new nodes are started, but they don't invoke cacheManager.getCache > > - the initial member is killed > > - a "testing" member is started, printing out the data in the console > > > > Here are my findings. > > > > 1/ Even when caches are declared in infinispan.xml, the data is lost > > after the initial member goes away. 
> > > > A little digging showed that the caches are really distributed only > > after you invoke cacheManager.getCache > > > > 2/ Checking cluster status "starts" triggers distribution > > > > I was wondering why the behavior was not the same as with my Vert.x > > testing on Openshift. And then realized the only difference was the > > cluster readiness check, which reads the cluster health. So I updated > > the reproducer code to add such a check (still without invoking > > cacheManager.getCache). Then the caches defined in infinispan.xml > have > > their data distributed. > > > > So, > > > > 1/ How can I make sure caches are distributed on all nodes, even > if some > > nodes never try to get a reference with cacheManager.getCache, or > don't > > check cluster health? > > 2/ Are we doing something wrong with our way to declare the default > > configuration for caches [1][2]? > > > > Thanks, > > Thomas > > > > [1] > > > https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10 > > > [2] > > > https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22 > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > -- > Tristan Tarrant > Infinispan Lead and Data Grid Architect > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From mudokonman at gmail.com Thu Mar 1 11:26:43 2018 From: mudokonman at gmail.com (William Burns) Date: Thu, 01 Mar 2018 16:26:43 +0000 Subject: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes? In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 11:21 AM William Burns wrote: > On Thu, Mar 1, 2018 at 11:14 AM Thomas SEGISMONT > wrote: > >> 2018-03-01 16:36 GMT+01:00 Tristan Tarrant : >> >>> You need to use the brand new CacheAdmin API: >>> >>> >>> http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches >> >> >> I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2. >> >> Is there any way to achieve these goals with 9.1.x? >> > > You could try using the ClusterExecutor to invoke getCache across all > nodes. Note it has to return null since a Cache is not Serializable. > > Fixed typo below, sorry > String cacheName = ; > cache.getCacheManager().executor().submitConsumer(cm -> { > cm.getCache(cacheName); > return null; > }, (a, v, t) -> { > if (t != null) { > System.out.println("There was an exception " + t + " > retrieving " + cacheName + " from node: " + a); > } > } > ); > > >> >> >> >>> >>> >>> Tristan >>> >>> On 3/1/18 4:30 PM, Thomas SEGISMONT wrote: >>> > Hi, >>> > >>> > This email follows up on my testing of the Infinispan Cluster Manager >>> > for Vert.x on Kubernetes. >>> > >>> > In one of the tests, we want to make sure that, after a rolling update >>> > of the application, the data submitted to Vert.x' AsyncMap is still >>> > present. And I found that when the underlying cache is predefined in >>> > infinispan.xml, the data is present, otherwise it's not. 
>>> > >>> > I pushed a simple reproducer on GitHub: >>> > https://github.com/tsegismont/cachedataloss >>> > >>> > The code does this: >>> > - a first node is started, and creates data >>> > - new nodes are started, but they don't invoke cacheManager.getCache >>> > - the initial member is killed >>> > - a "testing" member is started, printing out the data in the console >>> > >>> > Here are my findings. >>> > >>> > 1/ Even when caches are declared in infinispan.xml, the data is lost >>> > after the initial member goes away. >>> > >>> > A little digging showed that the caches are really distributed only >>> > after you invoke cacheManager.getCache >>> > >>> > 2/ Checking cluster status "starts" triggers distribution >>> > >>> > I was wondering why the behavior was not the same as with my Vert.x >>> > testing on Openshift. And then realized the only difference was the >>> > cluster readiness check, which reads the cluster health. So I updated >>> > the reproducer code to add such a check (still without invoking >>> > cacheManager.getCache). Then the caches defined in infinispan.xml have >>> > their data distributed. >>> > >>> > So, >>> > >>> > 1/ How can I make sure caches are distributed on all nodes, even if >>> some >>> > nodes never try to get a reference with cacheManager.getCache, or don't >>> > check cluster health? >>> > 2/ Are we doing something wrong with our way to declare the default >>> > configuration for caches [1][2]? >>> > >>> > Thanks, >>> > Thomas >>> > >>> > [1] >>> > >>> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10 >>> > [2] >>> > >>> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22 >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> >>> -- >>> Tristan Tarrant >>> Infinispan Lead and Data Grid Architect >>> JBoss, a division of Red Hat >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180301/b8a368d9/attachment.html From tsegismont at gmail.com Thu Mar 1 11:35:33 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Thu, 1 Mar 2018 17:35:33 +0100 Subject: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes? In-Reply-To: <97c97155-46b2-ec0d-906b-0aaa10afc381@redhat.com> References: <97c97155-46b2-ec0d-906b-0aaa10afc381@redhat.com> Message-ID: 2018-03-01 17:25 GMT+01:00 Tristan Tarrant : > Why not just prestart caches ? > > How can you do that? Will it work for caches created after the node has started? > On 3/1/18 5:14 PM, Thomas SEGISMONT wrote: > > > > 2018-03-01 16:36 GMT+01:00 Tristan Tarrant > >: > > > > You need to use the brand new CacheAdmin API: > > > > http://infinispan.org/docs/stable/user_guide/user_guide. > html#obtaining_caches > > html#obtaining_caches> > > > > > > I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2. > > > > Is there any way to achieve these goals with 9.1.x? 
> > > > > > > > > > Tristan > > > > On 3/1/18 4:30 PM, Thomas SEGISMONT wrote: > > > Hi, > > > > > > This email follows up on my testing of the Infinispan Cluster > Manager > > > for Vert.x on Kubernetes. > > > > > > In one of the tests, we want to make sure that, after a rolling > > update > > > of the application, the data submitted to Vert.x' AsyncMap is > still > > > present. And I found that when the underlying cache is predefined > in > > > infinispan.xml, the data is present, otherwise it's not. > > > > > > I pushed a simple reproducer on GitHub: > > > https://github.com/tsegismont/cachedataloss > > > > > > > > The code does this: > > > - a first node is started, and creates data > > > - new nodes are started, but they don't invoke > cacheManager.getCache > > > - the initial member is killed > > > - a "testing" member is started, printing out the data in the > console > > > > > > Here are my findings. > > > > > > 1/ Even when caches are declared in infinispan.xml, the data is > lost > > > after the initial member goes away. > > > > > > A little digging showed that the caches are really distributed > only > > > after you invoke cacheManager.getCache > > > > > > 2/ Checking cluster status "starts" triggers distribution > > > > > > I was wondering why the behavior was not the same as with my > Vert.x > > > testing on Openshift. And then realized the only difference was > the > > > cluster readiness check, which reads the cluster health. So I > updated > > > the reproducer code to add such a check (still without invoking > > > cacheManager.getCache). Then the caches defined in infinispan.xml > > have > > > their data distributed. > > > > > > So, > > > > > > 1/ How can I make sure caches are distributed on all nodes, even > > if some > > > nodes never try to get a reference with cacheManager.getCache, or > > don't > > > check cluster health? > > > 2/ Are we doing something wrong with our way to declare the > default > > > configuration for caches [1][2]? > > > > > > Thanks, > > > Thomas > > > > > > [1] > > > > > https://github.com/tsegismont/cachedataloss/blob/master/src/ > main/resources/infinispan.xml#L10 > > master/src/main/resources/infinispan.xml#L10> > > > [2] > > > > > https://github.com/tsegismont/cachedataloss/blob/master/src/ > main/resources/infinispan.xml#L22 > > master/src/main/resources/infinispan.xml#L22> > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > -- > > Tristan Tarrant > > Infinispan Lead and Data Grid Architect > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead and Data Grid Architect > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180301/79d3c0f5/attachment-0001.html From ttarrant at redhat.com Thu Mar 1 14:54:19 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 1 Mar 2018 20:54:19 +0100 Subject: [infinispan-dev] Testsuite stability Message-ID: <9efe09fe-6379-e99f-8627-5b42106c2493@redhat.com> Team, we currently have 6 failures happening on master: org.infinispan.test.hibernate.cache.commons.entity.EntityRegionAccessStrategyTest.testUpdate[non-JTA, REPL_SYNC,AccessType[read-write]] Radim is investigating this one in [1] org.infinispan.query.blackbox.CompatModeClusteredCacheTest.testMerge Gustavo/Adrian, any info on this one ? org.infinispan.query.remote.impl.ProtobufMetadataCachePreserveStateAcrossRestartsTest.testStatePreserved Adrian ? org.infinispan.spring.support.embedded.InfinispanDefaultCacheFactoryBeanContextTest.springTestContextPrepareTestInstance org.infinispan.spring.provider.sample.SampleRemoteCacheTest.springTestContextPrepareTestInstance These two are fixed by [2] org.infinispan.topology.ClusterTopologyManagerImplTest.testCoordinatorLostDuringRebalance Dan ? I think we can definitely get master green in time for 9.2.1.Final next week. [1] https://github.com/infinispan/infinispan/pull/5746 [2] https://github.com/infinispan/infinispan/pull/5803 -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From galder at redhat.com Fri Mar 2 05:36:39 2018 From: galder at redhat.com (Galder =?utf-8?Q?Zamarre=C3=B1o?=) Date: Fri, 02 Mar 2018 11:36:39 +0100 Subject: [infinispan-dev] Maintenance of OpenShift templates Message-ID: Hi, Looking at [1] and I'm wondering why the templates have to maintain a different XML file for OpenShift? We already ship an XML in the server called `cloud.xml`, that should just work. Having a separate XML file in the templates means we're duplicating the maintainance of XML files. Also, users can now create caches programmatically. This is by far the most common tweak that had to be done to the config. So, I see the urgency to change XML files less immediate. Sure, there will always be people who modify/tweak things and that's fine. We should however show the people how to do that in a way that doesn't require us to duplicate our maintanence work. Also, if we want to show the users how to use a custom XML file, I don't think we should show them how to embedd it in the template as JSON [2]. It's quite a pain. Instead, the XML should be kept as a separate file and the JSON file reference it. Cheers, [1] https://github.com/infinispan/infinispan-openshift-templates/pull/16/files [2] https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide From rory.odonnell at oracle.com Fri Mar 2 05:52:58 2018 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Fri, 2 Mar 2018 10:52:58 +0000 Subject: [infinispan-dev] JDK 10: Release Candidate & JDK 11 Early Access builds available Message-ID: <7df8de7f-cb3d-1b65-004c-6692b856c15e@oracle.com> ? 
Hi Galder, *JDK 10 build 45 is our JDK 10 Release Candidate and now available at http://jdk.java.net/10/* * Schedule, status & features o http://openjdk.java.net/projects/jdk/10/ * Release Notes o http://jdk.java.net/10/release-notes * Summary of changes in b45: o JDK-8198658 - Docs still point to JDK 9 docs *JDK 11 EA build 3, under both the GPL and Oracle EA licenses, are now available at **http://jdk.java.net/11**.* * Schedule, status & features o http://openjdk.java.net/projects/jdk/11/ * Release Notes: o http://jdk.java.net/11/release-notes * Summary of changes o https://download.java.net/java/early_access/jdk11/2/jdk-11+2.html * JEPs targeted to JDK 11, so far o 309: Dynamic Class-File Constants o 318: Epsilon: An Arbitrarily Low-Overhead Garbage Collector o *320: **Remove the Java EE and CORBA Modules * ** + ** *This build includes JEP 320, so build is significantly smaller (nine fewer modules, 22 fewer megabyteson Linux/x64).* o 323: Local-Variable Syntax for Lambda Parameters * Open Source Project fixes in JDK 11 build 1 o JDK-8195096 - Apache Tomcat + Exception with custom LogManager on starting Apache Tomcat o JDK-8193802 - Apache Maven + NullPointerException from JarFileSystem.getVersionMap() o JDK-8191842 - jOOQ + JShell: Inferred type information is lost when assigning types to a "var" Finally, the Crypto roadmap was updated - 23-Feb-2018** ** * Add support for AEAD TLS Cipher Suites o Target date changed from 2018-04-17 to 2018-07-17 Regards, Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180302/d63a70d6/attachment.html From rvansa at redhat.com Mon Mar 5 05:39:39 2018 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 5 Mar 2018 11:39:39 +0100 Subject: [infinispan-dev] Testsuite stability In-Reply-To: <9efe09fe-6379-e99f-8627-5b42106c2493@redhat.com> References: <9efe09fe-6379-e99f-8627-5b42106c2493@redhat.com> Message-ID: <0a145505-a9ce-979e-9cb8-015b3f45b92a@redhat.com> On 03/01/2018 08:54 PM, Tristan Tarrant wrote: > Team, > > we currently have 6 failures happening on master: > > org.infinispan.test.hibernate.cache.commons.entity.EntityRegionAccessStrategyTest.testUpdate[non-JTA, > REPL_SYNC,AccessType[read-write]] > > Radim is investigating this one in [1] Uh, how is [1] related to 2LC? Both tests start with 'Ent' but that's all... I think Galder should be investigating that failure. > > org.infinispan.query.blackbox.CompatModeClusteredCacheTest.testMerge > > Gustavo/Adrian, any info on this one ? > > org.infinispan.query.remote.impl.ProtobufMetadataCachePreserveStateAcrossRestartsTest.testStatePreserved > > Adrian ? > > org.infinispan.spring.support.embedded.InfinispanDefaultCacheFactoryBeanContextTest.springTestContextPrepareTestInstance > org.infinispan.spring.provider.sample.SampleRemoteCacheTest.springTestContextPrepareTestInstance > > These two are fixed by [2] > > org.infinispan.topology.ClusterTopologyManagerImplTest.testCoordinatorLostDuringRebalance > > Dan ? > > I think we can definitely get master green in time for 9.2.1.Final next > week. 
> > > [1] https://github.com/infinispan/infinispan/pull/5746 > [2] https://github.com/infinispan/infinispan/pull/5803 -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Mon Mar 5 06:30:05 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 5 Mar 2018 12:30:05 +0100 Subject: [infinispan-dev] Testsuite stability In-Reply-To: <0a145505-a9ce-979e-9cb8-015b3f45b92a@redhat.com> References: <9efe09fe-6379-e99f-8627-5b42106c2493@redhat.com> <0a145505-a9ce-979e-9cb8-015b3f45b92a@redhat.com> Message-ID: On 3/5/18 11:39 AM, Radim Vansa wrote: > On 03/01/2018 08:54 PM, Tristan Tarrant wrote: >> Team, >> >> we currently have 6 failures happening on master: >> >> org.infinispan.test.hibernate.cache.commons.entity.EntityRegionAccessStrategyTest.testUpdate[non-JTA, >> REPL_SYNC,AccessType[read-write]] >> >> Radim is investigating this one in [1] > > Uh, how is [1] related to 2LC? Both tests start with 'Ent' but that's > all... I think Galder should be investigating that failure. Damn, sleep deprivation plays tricks on me. Tristan -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From slaskawi at redhat.com Mon Mar 5 08:40:25 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 05 Mar 2018 13:40:25 +0000 Subject: [infinispan-dev] Maintenance of OpenShift templates In-Reply-To: References: Message-ID: Hey Galder, Comments inlined. Thanks, Seb On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarre?o wrote: > Hi, > > Looking at [1] and I'm wondering why the templates have to maintain a > different XML file for OpenShift? > > We already ship an XML in the server called `cloud.xml`, that should > just work. Having a separate XML file in the templates means we're > duplicating the maintainance of XML files. > > Also, users can now create caches programmatically. This is by far the > most common tweak that had to be done to the config. So, I see the > urgency to change XML files less immediate. > So just to give you guys a bit more context - the templates were created pretty long time ago when we didn't have admin capabilities in Hot Rod and REST. The main argument for putting the whole configuration into a ConfigMap was to make configuration changes easier for the users. With ConfigMap approach they can log into OpenShift UI, go to Resources -> ConfigMaps and edit everything using UI. That's super convenient for hacking in my opinion. Of course, you don't need to do that at all if you don't want. You can just spin up a new Infinispan cluster using `oc new-app`. There are at least two other ways for changing the configuration that I can think of. The first one is S2I [1][2] (long story short, you need to put your configuration into a git repository and tell OpenShift to build an image based on it). Even though it may seem very convenient, it's OpenShift only solution (and there are no easy (out of the box) options to get this running on raw Kubernetes). I'm not judging whether it's good or bad here, just telling you how it works. The other option would be to tell the users to do exactly the same things we do in our templates themselves. In other words we would remove configuration from the templates and provide a manual for the users how to deal with configuration. I believe this is exactly what Galder is suggesting, right? Recently we implemented admin commands in the Hot Rod. 
Assuming that caches created this way are not wiped out during restart (that needs to be checked), we could remove the configuration from the templates and tell the users to create their caches over Hot Rod and REST. However we still need to have a back door for modifying configuration manually since there are some changes that can not be done via admin API. [1] https://github.com/openshift/source-to-image [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble > > Sure, there will always be people who modify/tweak things and that's > fine. We should however show the people how to do that in a way that > doesn't require us to duplicate our maintanence work. > If we think about further maintenance, I believe we should take more things into consideration. During the last planning meeting Tristan mentioned about bringing the project and the product closer together. On the Cloud Enablement side of things there are ongoing experiments to get a community images out. If we decided to take this direction (the CE way), our templates would need to be deprecated or will change drastically. The image will react on different set of variables and configuration options. Also, if we want to show the users how to use a custom XML file, I don't > think we should show them how to embedd it in the template as JSON > [2]. It's quite a pain. Instead, the XML should be kept as a separate > file and the JSON file reference it. > I'm still struggling to understand why this is a pain. Could you please explain it a bit more? If you look into the maintenance guide [3], there are only a few steps. For me it takes no longer than 15 minutes to do the upgrade. You also mentioned on IRC that this approach is a pain for our users (I believe you mentioned something about Ray). I also can not understand why, could you please explain it a bit more? [3] https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > Cheers, > > [1] > https://github.com/infinispan/infinispan-openshift-templates/pull/16/files > [2] > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180305/a10d76ce/attachment-0001.html From galder at redhat.com Tue Mar 6 11:11:39 2018 From: galder at redhat.com (Galder =?utf-8?Q?Zamarre=C3=B1o?=) Date: Tue, 06 Mar 2018 17:11:39 +0100 Subject: [infinispan-dev] Maintenance of OpenShift templates In-Reply-To: (Sebastian Laskawiec's message of "Mon, 05 Mar 2018 13:40:25 +0000") References: Message-ID: Sebastian Laskawiec writes: > Hey Galder, > > Comments inlined. > > Thanks, > Seb > > On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarre?o > wrote: > > Hi, > > Looking at [1] and I'm wondering why the templates have to > maintain a > different XML file for OpenShift? > > We already ship an XML in the server called `cloud.xml`, that > should > just work. Having a separate XML file in the templates means we're > duplicating the maintainance of XML files. > > Also, users can now create caches programmatically. This is by far > the > most common tweak that had to be done to the config. So, I see the > urgency to change XML files less immediate. 
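(For reference, creating a cache through the Hot Rod admin commands Sebastian mentions above would look roughly like the sketch below with a 9.2 client; the host, port, cache name and template name are illustrative only and are not what the templates actually expose. As noted above, whether a cache created this way survives a server restart is exactly the part that still needs to be verified.)

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class RemoteAdminSketch {

   public static void main(String[] args) {
      // "infinispan-app-hotrod" stands in for whatever service name the deployment exposes.
      RemoteCacheManager remote = new RemoteCacheManager(
            new ConfigurationBuilder().addServer()
                  .host("infinispan-app-hotrod").port(11222)
                  .build());

      // Requires Infinispan 9.2+ on both client and server; the cache is created from the
      // named server-side configuration template if it does not exist yet.
      RemoteCache<String, String> cache = remote.administration().getOrCreateCache("sessions", "default");
      cache.put("greeting", "hello");

      remote.stop();
   }
}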
> > So just to give you guys a bit more context - the templates were > created pretty long time ago when we didn't have admin capabilities in > Hot Rod and REST. The main argument for putting the whole > configuration into a ConfigMap was to make configuration changes > easier for the users. With ConfigMap approach they can log into > OpenShift UI, go to Resources -> ConfigMaps and edit everything using > UI. That's super convenient for hacking in my opinion. Of course, you > don't need to do that at all if you don't want. You can just spin up a > new Infinispan cluster using `oc new-app`. I agree with the usability of the ConfigMap. However, the duplication is very annoying. Would it be possible for the ConfigMap to be created on the fly out of the cloud.xml that's shipped by Infinispan Server? That way we'd still have a ConfigMap without having to duplicate XML. > There are at least two other ways for changing the configuration that > I can think of. The first one is S2I [1][2] (long story short, you > need to put your configuration into a git repository and tell > OpenShift to build an image based on it). Even though it may seem very > convenient, it's OpenShift only solution (and there are no easy (out > of the box) options to get this running on raw Kubernetes). I'm not > judging whether it's good or bad here, just telling you how it works. > The other option would be to tell the users to do exactly the same > things we do in our templates themselves. In other words we would > remove configuration from the templates and provide a manual for the > users how to deal with configuration. I believe this is exactly what > Galder is suggesting, right? What we do in the templates right now to show users how to tweak their config is in convoluted. Ideally, adding their own custom configuration should be just a matter of: 1. Creating a ConfigMap yaml pointing to an XML. 2. Ask users to put their XML in a separate file pointed by the ConfigMap. 3. Deploy ConfigMap and XML. 4. Trigger a new Infinispan redeployment. Not sure how doable this is with the current template approach, or we could explain how to do this for an already up and running application that has Infinispan created out of the default template? > > Recently we implemented admin commands in the Hot Rod. Assuming that > caches created this way are not wiped out during restart (that needs > to be checked), we could remove the configuration from the templates > and tell the users to create their caches over Hot Rod and REST. > However we still need to have a back door for modifying configuration > manually since there are some changes that can not be done via admin > API. > > [1] https://github.com/openshift/source-to-image > [2] > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble > > > Sure, there will always be people who modify/tweak things and > that's > fine. We should however show the people how to do that in a way > that > doesn't require us to duplicate our maintanence work. > > If we think about further maintenance, I believe we should take more > things into consideration. During the last planning meeting Tristan > mentioned about bringing the project and the product closer together. > On the Cloud Enablement side of things there are ongoing experiments > to get a community images out. > > If we decided to take this direction (the CE way), our templates would > need to be deprecated or will change drastically. The image will react > on different set of variables and configuration options. 
> > Also, if we want to show the users how to use a custom XML file, I > don't > think we should show them how to embedd it in the template as JSON > [2]. It's quite a pain. Instead, the XML should be kept as a > separate > file and the JSON file reference it. > > I'm still struggling to understand why this is a pain. Could you > please explain it a bit more? If you look into the maintenance guide > [3], there are only a few steps. For me it takes no longer than 15 > minutes to do the upgrade. You also mentioned on IRC that this > approach is a pain for our users (I believe you mentioned something > about Ray). I also can not understand why, could you please explain it > a bit more? > > [3] > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > > Cheers, > > [1] > https://github.com/infinispan/infinispan-openshift-templates/pull/16/files > > [2] > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Tue Mar 6 11:58:50 2018 From: galder at redhat.com (Galder =?utf-8?Q?Zamarre=C3=B1o?=) Date: Tue, 06 Mar 2018 17:58:50 +0100 Subject: [infinispan-dev] Wildfly Clustering on OpenShift In-Reply-To: <2.bb140d5a996691081617@NY-WEB01> (Stack Exchange's message of "Thu, 01 Mar 2018 09:18:45 +0000") References: <2.bb140d5a996691081617@NY-WEB01> Message-ID: Are there any configurations out of box inside Wildfly for clustering on OpenShift? It'd need a transport that uses kube ping? "Stack Exchange" writes: > Stack Exchange > > Stack Exchange > > * The following item was added to your Stack Exchange > "infinispan-user" feed. > Stack Overflow Infinispan replicated cache not replicating > objects for read > > We are trying to install a replicated cache > across two infinispan nodes running on > Wildfly 11 inside of Openshift. When we write > an object on one node it doesn't show up on > the other node for reading. ... > > tagged: java, wildfly, Mar 1 at 9:13 > infinispan, infinispan-9 > Unsubscribe from this filter or change your > email preferences by visiting your filter > subscriptions page on stackexchange.com. > > Questions? Comments? Let us know on our feedback site. > > Stack Exchange Inc. 110 William Street, 28th floor, NY NY 10038 <3 > Stack Exchange > * > Stack Overflow From slaskawi at redhat.com Wed Mar 7 03:11:54 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 07 Mar 2018 08:11:54 +0000 Subject: [infinispan-dev] Maintenance of OpenShift templates In-Reply-To: References: Message-ID: On Tue, Mar 6, 2018 at 5:11 PM Galder Zamarre?o wrote: > Sebastian Laskawiec writes: > > > Hey Galder, > > > > Comments inlined. > > > > Thanks, > > Seb > > > > On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarre?o > > wrote: > > > > Hi, > > > > Looking at [1] and I'm wondering why the templates have to > > maintain a > > different XML file for OpenShift? > > > > We already ship an XML in the server called `cloud.xml`, that > > should > > just work. Having a separate XML file in the templates means we're > > duplicating the maintainance of XML files. > > > > Also, users can now create caches programmatically. 
This is by far > > the > > most common tweak that had to be done to the config. So, I see the > > urgency to change XML files less immediate. > > > > So just to give you guys a bit more context - the templates were > > created pretty long time ago when we didn't have admin capabilities in > > Hot Rod and REST. The main argument for putting the whole > > configuration into a ConfigMap was to make configuration changes > > easier for the users. With ConfigMap approach they can log into > > OpenShift UI, go to Resources -> ConfigMaps and edit everything using > > UI. That's super convenient for hacking in my opinion. Of course, you > > don't need to do that at all if you don't want. You can just spin up a > > new Infinispan cluster using `oc new-app`. > > I agree with the usability of the ConfigMap. However, the duplication is > very annoying. Would it be possible for the ConfigMap to be created on > the fly out of the cloud.xml that's shipped by Infinispan Server? That > way we'd still have a ConfigMap without having to duplicate XML. > Probably not. This would require special permissions to call Kubernetes API from the Pod. In other words, I can't think about any other way that would work in OpenShift Online for the instance. > > > There are at least two other ways for changing the configuration that > > I can think of. The first one is S2I [1][2] (long story short, you > > need to put your configuration into a git repository and tell > > OpenShift to build an image based on it). Even though it may seem very > > convenient, it's OpenShift only solution (and there are no easy (out > > of the box) options to get this running on raw Kubernetes). I'm not > > judging whether it's good or bad here, just telling you how it works. > > The other option would be to tell the users to do exactly the same > > things we do in our templates themselves. In other words we would > > remove configuration from the templates and provide a manual for the > > users how to deal with configuration. I believe this is exactly what > > Galder is suggesting, right? > > What we do in the templates right now to show users how to tweak their > config is in convoluted. > > Ideally, adding their own custom configuration should be just a matter > of: > > 1. Creating a ConfigMap yaml pointing to an XML. > 2. Ask users to put their XML in a separate file pointed by the ConfigMap. > 3. Deploy ConfigMap and XML. > 4. Trigger a new Infinispan redeployment. > That would probably need to be a new deployment. Most of the StatefulSet spec is immutable. > > Not sure how doable this is with the current template approach, or we > could explain how to do this for an already up and running application > that has Infinispan created out of the default template? > I've been thinking about this for a while and this is what I think we should do: 1. Wait a couple of weeks and review the community image created by the CE Team. See if this is a good fit for us. If it is, I would focus on adopting this approach and adjust our templates to handle it. 2. Whether or not we adopt the CE community work, we could put all necessary stuff into cloud.xml or services.xml configuration. We could do one step forward and merge them together. 3. Make sure that dynamically created caches are persisted (this is super important!!) 4. Once #3 is verified we should have a decision whether or not we are adopting the CE way. At this point we could document how to use custom configuration with a ConfigMap and drop it from the templates. WDYT? 
Does this plan makes sense to you? > > > > > Recently we implemented admin commands in the Hot Rod. Assuming that > > caches created this way are not wiped out during restart (that needs > > to be checked), we could remove the configuration from the templates > > and tell the users to create their caches over Hot Rod and REST. > > However we still need to have a back door for modifying configuration > > manually since there are some changes that can not be done via admin > > API. > > > > [1] https://github.com/openshift/source-to-image > > [2] > > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble > > > > > > Sure, there will always be people who modify/tweak things and > > that's > > fine. We should however show the people how to do that in a way > > that > > doesn't require us to duplicate our maintanence work. > > > > If we think about further maintenance, I believe we should take more > > things into consideration. During the last planning meeting Tristan > > mentioned about bringing the project and the product closer together. > > On the Cloud Enablement side of things there are ongoing experiments > > to get a community images out. > > > > If we decided to take this direction (the CE way), our templates would > > need to be deprecated or will change drastically. The image will react > > on different set of variables and configuration options. > > > > Also, if we want to show the users how to use a custom XML file, I > > don't > > think we should show them how to embedd it in the template as JSON > > [2]. It's quite a pain. Instead, the XML should be kept as a > > separate > > file and the JSON file reference it. > > > > I'm still struggling to understand why this is a pain. Could you > > please explain it a bit more? If you look into the maintenance guide > > [3], there are only a few steps. For me it takes no longer than 15 > > minutes to do the upgrade. You also mentioned on IRC that this > > approach is a pain for our users (I believe you mentioned something > > about Ray). I also can not understand why, could you please explain it > > a bit more? > > > > [3] > > > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > > > > > Cheers, > > > > [1] > > > https://github.com/infinispan/infinispan-openshift-templates/pull/16/files > > > > [2] > > > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180307/e59a9d2f/attachment-0001.html From galder at redhat.com Wed Mar 7 06:14:50 2018 From: galder at redhat.com (Galder =?utf-8?Q?Zamarre=C3=B1o?=) Date: Wed, 07 Mar 2018 12:14:50 +0100 Subject: [infinispan-dev] Maintenance of OpenShift templates In-Reply-To: (Sebastian Laskawiec's message of "Wed, 07 Mar 2018 08:11:54 +0000") References: Message-ID: Sebastian Laskawiec writes: > On Tue, Mar 6, 2018 at 5:11 PM Galder Zamarre?o > wrote: > > Sebastian Laskawiec writes: > > > Hey Galder, > > > > Comments inlined. 
> > > > Thanks, > > Seb > > > > On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarre?o > > > wrote: > > > > Hi, > > > > Looking at [1] and I'm wondering why the templates have to > > maintain a > > different XML file for OpenShift? > > > > We already ship an XML in the server called `cloud.xml`, that > > should > > just work. Having a separate XML file in the templates means > we're > > duplicating the maintainance of XML files. > > > > Also, users can now create caches programmatically. This is by > far > > the > > most common tweak that had to be done to the config. So, I see > the > > urgency to change XML files less immediate. > > > > So just to give you guys a bit more context - the templates were > > created pretty long time ago when we didn't have admin > capabilities in > > Hot Rod and REST. The main argument for putting the whole > > configuration into a ConfigMap was to make configuration changes > > easier for the users. With ConfigMap approach they can log into > > OpenShift UI, go to Resources -> ConfigMaps and edit everything > using > > UI. That's super convenient for hacking in my opinion. Of > course, you > > don't need to do that at all if you don't want. You can just > spin up a > > new Infinispan cluster using `oc new-app`. > > I agree with the usability of the ConfigMap. However, the > duplication is > very annoying. Would it be possible for the ConfigMap to be > created on > the fly out of the cloud.xml that's shipped by Infinispan Server? > That > way we'd still have a ConfigMap without having to duplicate XML. > > Probably not. This would require special permissions to call > Kubernetes API from the Pod. In other words, I can't think about any > other way that would work in OpenShift Online for the instance. > > > There are at least two other ways for changing the configuration > that > > I can think of. The first one is S2I [1][2] (long story short, > you > > need to put your configuration into a git repository and tell > > OpenShift to build an image based on it). Even though it may > seem very > > convenient, it's OpenShift only solution (and there are no easy > (out > > of the box) options to get this running on raw Kubernetes). I'm > not > > judging whether it's good or bad here, just telling you how it > works. > > The other option would be to tell the users to do exactly the > same > > things we do in our templates themselves. In other words we > would > > remove configuration from the templates and provide a manual for > the > > users how to deal with configuration. I believe this is exactly > what > > Galder is suggesting, right? > > What we do in the templates right now to show users how to tweak > their > config is in convoluted. > > Ideally, adding their own custom configuration should be just a > matter > of: > > 1. Creating a ConfigMap yaml pointing to an XML. > 2. Ask users to put their XML in a separate file pointed by the > ConfigMap. > 3. Deploy ConfigMap and XML. > 4. Trigger a new Infinispan redeployment. > > That would probably need to be a new deployment. Most of the > StatefulSet spec is immutable. > > Not sure how doable this is with the current template approach, or > we > could explain how to do this for an already up and running > application > that has Infinispan created out of the default template? > > I've been thinking about this for a while and this is what I think we > should do: > > 1 Wait a couple of weeks and review the community image created by the > CE Team. See if this is a good fit for us. 
If it is, I would focus > on adopting this approach and adjust our templates to handle it. > 2 Whether or not we adopt the CE community work, we could put all > necessary stuff into cloud.xml or services.xml configuration. We > could do one step forward and merge them together. > 3 Make sure that dynamically created caches are persisted (this is > super important!!) > 4 Once #3 is verified we should have a decision whether or not we are > adopting the CE way. At this point we could document how to use > custom configuration with a ConfigMap and drop it from the > templates. > > WDYT? Does this plan makes sense to you? Sounds good > > > > > Recently we implemented admin commands in the Hot Rod. Assuming > that > > caches created this way are not wiped out during restart (that > needs > > to be checked), we could remove the configuration from the > templates > > and tell the users to create their caches over Hot Rod and REST. > > However we still need to have a back door for modifying > configuration > > manually since there are some changes that can not be done via > admin > > API. > > > > [1] https://github.com/openshift/source-to-image > > [2] > > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble > > > > > > > Sure, there will always be people who modify/tweak things and > > that's > > fine. We should however show the people how to do that in a way > > that > > doesn't require us to duplicate our maintanence work. > > > > If we think about further maintenance, I believe we should take > more > > things into consideration. During the last planning meeting > Tristan > > mentioned about bringing the project and the product closer > together. > > On the Cloud Enablement side of things there are ongoing > experiments > > to get a community images out. > > > > If we decided to take this direction (the CE way), our templates > would > > need to be deprecated or will change drastically. The image will > react > > on different set of variables and configuration options. > > > > Also, if we want to show the users how to use a custom XML file, > I > > don't > > think we should show them how to embedd it in the template as > JSON > > [2]. It's quite a pain. Instead, the XML should be kept as a > > separate > > file and the JSON file reference it. > > > > I'm still struggling to understand why this is a pain. Could you > > please explain it a bit more? If you look into the maintenance > guide > > [3], there are only a few steps. For me it takes no longer than > 15 > > minutes to do the upgrade. You also mentioned on IRC that this > > approach is a pain for our users (I believe you mentioned > something > > about Ray). I also can not understand why, could you please > explain it > > a bit more? 
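For illustration of the "create their caches over Hot Rod" option discussed in this thread, here is a minimal sketch against the Hot Rod client admin API introduced around 9.2. The server address, cache name and template name are placeholders (template names depend on the server configuration), and the exact method names are worth double-checking against the release:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CreateCacheOverHotRod {
   public static void main(String[] args) {
      // Connect to a single server; host and port are placeholders for this sketch.
      RemoteCacheManager remote = new RemoteCacheManager(
            new ConfigurationBuilder().addServer().host("127.0.0.1").port(11222).build());
      try {
         // Ask the server to create the cache (or reuse it if it already exists)
         // from a configuration template defined on the server side.
         RemoteCache<String, String> cache =
               remote.administration().getOrCreateCache("my-cache", "default");
         cache.put("hello", "world");
      } finally {
         remote.stop();
      }
   }
}

Whether caches created this way survive a server restart is exactly the open question raised in the plan above (point 3).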
> > > > [3] > > > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > > > > > > Cheers, > > > > [1] > > > https://github.com/infinispan/infinispan-openshift-templates/pull/16/files > > > > > [2] > > > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Wed Mar 7 08:11:53 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 07 Mar 2018 13:11:53 +0000 Subject: [infinispan-dev] HTTP/2 and REST Message-ID: Hey Infinispan Community, I've just published a blog post about HTTP/2 support in Infinispan: http://bit.ly/infinispan-http2 Have fun and drop me a message if you have any comments. Thanks, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180307/520aec94/attachment.html From slaskawi at redhat.com Thu Mar 8 04:49:46 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 08 Mar 2018 09:49:46 +0000 Subject: [infinispan-dev] Public cluster discovery service Message-ID: Hey Bela, I've just stumbled upon this: https://coreos.com/os/docs/latest/cluster-discovery.html The Etcd folks created a public discovery service. You need to use a token and get a discovery string back. I believe that's super, super useful for demos across multiple public clouds. What do you think about that? Perhaps we could implement an ETCD_PING and just reuse their service or write our own? Thanks, Seb -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180308/1a5a86a0/attachment.html From belaban at mailbox.org Thu Mar 8 05:47:02 2018 From: belaban at mailbox.org (Bela Ban) Date: Thu, 8 Mar 2018 11:47:02 +0100 Subject: [infinispan-dev] Public cluster discovery service In-Reply-To: References: Message-ID: <26fac1ec-370a-fd57-4157-47227e189643@mailbox.org> On 08/03/18 10:49, Sebastian Laskawiec wrote: > Hey Bela, > > I've just stumbled upon this: > https://coreos.com/os/docs/latest/cluster-discovery.html > > The Etcd folks created a public discovery service. You need to use a > token and get a discovery string back. I believe that's super, super > useful for demos across multiple public clouds. Why? This is conceptually the same as running a GossipRouter on a public, DNS-mapped, IP address... The real challenge with cross-cloud clusters is (as you and I discovered) to bridge the non-public addresses of local cloud members with members running in different clouds. Unless you make all members use public IP addresses, but that's not something that's typically advised in a cloud env. > What do you think about that? Perhaps we could implement an ETCD_PING > and just reuse their service or write our own? Sure, should be simple. But - again - what's the goal? 
If discovery.etcd.io can be used as a public *permanent* discovery service, yes, cool > Thanks, > Seb > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban | http://www.jgroups.org From slaskawi at redhat.com Fri Mar 9 05:26:21 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 09 Mar 2018 10:26:21 +0000 Subject: [infinispan-dev] Public cluster discovery service In-Reply-To: <26fac1ec-370a-fd57-4157-47227e189643@mailbox.org> References: <26fac1ec-370a-fd57-4157-47227e189643@mailbox.org> Message-ID: On Thu, Mar 8, 2018 at 11:47 AM Bela Ban wrote: > > > On 08/03/18 10:49, Sebastian Laskawiec wrote: > > Hey Bela, > > > > I've just stumbled upon this: > > https://coreos.com/os/docs/latest/cluster-discovery.html > > > > The Etcd folks created a public discovery service. You need to use a > > token and get a discovery string back. I believe that's super, super > > useful for demos across multiple public clouds. > > > Why? This is conceptually the same as running a GossipRouter on a > public, DNS-mapped, IP address... > > > The real challenge with cross-cloud clusters is (as you and I > discovered) to bridge the non-public addresses of local cloud members > with members running in different clouds. > I totally agree with you here. It's pretty bad that there is no way for the Pod to learn what is the external Load Balancer address that exposes it. The only way I can see to fix this is to write a very small application which will do this mapping. Then the app should use PodInjectionPolicy [1] (or a similar Admission Controller [2]) So back to the publicly available GossipRouter - I still believe there is a potential in this solution and we should create a small tutorial telling users how to do it (maybe a template for OpenShift?). But granted - Admission Controller work (the mapper I mentioned the above) is by far more important. [1] https://kubernetes.io/docs/tasks/inject-data-application/podpreset/ [2] https://kubernetes.io/docs/admin/admission-controllers/ > > Unless you make all members use public IP addresses, but that's not > something that's typically advised in a cloud env. > > > > What do you think about that? Perhaps we could implement an ETCD_PING > > and just reuse their service or write our own? > > Sure, should be simple. But - again - what's the goal? If > discovery.etcd.io can be used as a public *permanent* discovery service, > yes, cool > You convinced me - GossipRouter is the right way to go here. > > > Thanks, > > Seb > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Bela Ban | http://www.jgroups.org > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180309/ebd8bcde/attachment-0001.html From sanne at infinispan.org Thu Mar 15 16:44:41 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 15 Mar 2018 20:44:41 +0000 Subject: [infinispan-dev] [OGM] Reasonable file safe storage defaults? Message-ID: Hi all, we're updating Hibernate OGM to Infinispan 9.2, using now persisted counters. 
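For reference while reading the questions below: counters declared with persistent storage are written under the cache manager's global state location, so the directory question raised here applies to them as well. A minimal sketch of enabling the global state programmatically; the path is only an example for illustration, not an agreed default:

import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class GlobalStateLocation {
   public static void main(String[] args) {
      GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
      // Global state must be enabled for persistent counters; the location is
      // where Infinispan keeps that state between restarts.
      global.globalState().enable().persistentLocation("/var/infinispan");
      DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());
      // ... define caches and counters here ...
      cacheManager.stop();
   }
}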
To help people getting started, we include a default Infinispan configuration which is suitable for OGM: - clustering enabled - transactions enabled - counters enabled Of course being a general purpose default it won't be perfect but I'd like it to be a fairly safe default to give people confidence. What would you all think regarding: # Using /tmp? We might want to pick an "Infinispan default path, something like "/var/infinispan" for unix systems? This might complicate the "easy default" as it would then require user permissions to write in such a path. # Is it consistent? Does it even make sense to store the counters when we have no default cachestore for the rest of the data? Should we enable a File CacheStore by default, which path to use? # Should Infinispan have such defaults? Rather than coding in OGM an attempt to write to /var/infinispan followed with user friendly warnings/errors and possibly a fallback, should Infinispan include such logic? Thanks, Sanne From rory.odonnell at oracle.com Wed Mar 21 06:56:30 2018 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Wed, 21 Mar 2018 10:56:30 +0000 Subject: [infinispan-dev] Release Announcement: General Availability of JDK 10 Message-ID: Hi Galder, A number of items to share with you today : *1) JDK 10 General Availability * JDK 10, the first release produced under the six-month rapid-cadence release model [1][2], is now Generally Available. We've identified no P1 bugs since we promoted build 46 almost two weeks ago, so that is the official GA release, ready for production use. GPL'd binaries from Oracle are available here: http://jdk.java.net/10 This release includes twelve features: * 286: Local-Variable Type Inference * 296: Consolidate the JDK Forest into a Single Repository * 304: Garbage-Collector Interface * 307: Parallel Full GC for G1 * 310: Application Class-Data Sharing * 312: Thread-Local Handshakes * 313: Remove the Native-Header Generation Tool (javah) * 314: Additional Unicode Language-Tag Extensions * 316: Heap Allocation on Alternative Memory Devices * 317: Experimental Java-Based JIT Compiler * 319: Root Certificates * 322: Time-Based Release Versioning *2) JDK 11 EA build 5, under both the GPL and Oracle EA licenses, are now available at **http://jdk.java.net/11**.* * Schedule, status & features o http://openjdk.java.net/projects/jdk/11/ * Release Notes: o http://jdk.java.net/11/release-notes * Summary of changes o https://download.java.net/java/early_access/jdk11/5/jdk-11+5.html *3) The Z Garbage Collector Project, early access builds available : * The first EA binary from from The Z Garbage Collector Project, also known as ZGC, is now available. ZGC is a scalable low latency garbage collector. For information on how to enable and use ZGC, please see the project wiki. * Project page: http://openjdk.java.net/projects/zgc/ * Wiki: https://wiki.openjdk.java.net/display/zgc/Main *4) Quality Outreach Report for **March 2018 **is available * * https://wiki.openjdk.java.net/display/quality/Quality+Outreach+report+March+2018 *5) **Java Client Roadmap Update * * We posted a blog [3] and related white paper [4] detailing our plans for the Java Client. 
Rgds, Rory [1] https://mreinhold.org/blog/forward-faster [2] http://mail.openjdk.java.net/pipermail/discuss/2017-September/004281.html [3] Blog: https://blogs.oracle.com/java-platform-group/the-future-of-javafx-and-other-java-client-roadmap-updates [4] Whitepaper: http://www.oracle.com/technetwork/java/javase/javaclientroadmapupdate2018mar-4414431.pdf -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180321/84b9db01/attachment.html From tsegismont at gmail.com Wed Mar 21 12:16:01 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Wed, 21 Mar 2018 17:16:01 +0100 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown Message-ID: Hi everyone, I am working on integrating Infinispan 9.2.Final in vertx-infinispan. Before merging, I wanted to make sure the test suite passed, but it doesn't. It's not always the same test that is involved. In the logs, I see a lot of messages like "After merge (or coordinator change), cache still hasn't recovered a majority of members and must stay in degraded mode". The contexts involved are "___counter_configuration" and "org.infinispan.LOCKS". Most often it's harmless, but sometimes I also see this exception: "ISPN000210: Failed to request state of cache". Again, the cache involved is either "___counter_configuration" or "org.infinispan.LOCKS". After this exception, the cache manager is unable to stop. It blocks in method "terminate" (join on cache future). I thought the test suite was too rough (we stop all nodes at the same time). So I changed it to make sure that: - nodes start one after the other - a new node is started only when the previous one indicates HEALTHY status - nodes stop one after the other - a node is stopped only when it indicates HEALTHY status Pretty much what we do on Kubernetes for the readiness check, actually. But it didn't get any better. Attached are the logs of such a failing test. Note that the Vert.x test itself does not fail; it's only when closing nodes that we have issues. Here's our XML config: https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml Does that ring a bell? Do you need more info? Regards, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180321/c91f49d8/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: InfinispanClusteredEventBusTest#testPublishByteArray.log Type: text/x-log Size: 153822 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180321/c91f49d8/attachment-0001.bin From karesti at redhat.com Wed Mar 21 16:18:38 2018 From: karesti at redhat.com (Katia Aresti) Date: Wed, 21 Mar 2018 21:18:38 +0100 Subject: [infinispan-dev] 3rd article about Vert.x and Infinispan Message-ID: Hi all, I've published a 3rd article about Vert.x and Infinispan. The cluster manager based on version 9.2 won't be released soon, so I'm using 9.1 for the showcase; when the 9.2-based version is released, I will update the code of the tutorial. http://blog.infinispan.org/2018/03/clustering-vertx-with-infinispan.html If you see anything that should be changed, please contact me! Cheers, Katia -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180321/c1bfcd80/attachment.html From slaskawi at redhat.com Thu Mar 22 09:50:02 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 22 Mar 2018 13:50:02 +0000 Subject: [infinispan-dev] 3rd article about Vert.x and Infinispan In-Reply-To: References: Message-ID: Very nice! Thanks Katia! BTW, what kind of tool did you use for drawings? Thanks, Seb On Wed, Mar 21, 2018 at 9:21 PM Katia Aresti wrote: > Hi all, > > I've published a 3rd article about Vert.x and Infinispan. > > The cluster manager using version 9.2 won't be released soon, so I'm using > 9.1 to showcase and when the version using 9.2 will be released I will > update the code of the tutorial. > > http://blog.infinispan.org/2018/03/clustering-vertx-with-infinispan.html > > If you see anything that should be changed, please, contact me ! > > Cheers, > > Katia > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180322/edafc048/attachment.html From karesti at redhat.com Thu Mar 22 14:15:59 2018 From: karesti at redhat.com (Katia Aresti) Date: Thu, 22 Mar 2018 19:15:59 +0100 Subject: [infinispan-dev] 3rd article about Vert.x and Infinispan In-Reply-To: References: Message-ID: Thank you Sebastian ! if you mean the app schema, google slides ! :) On Thu, Mar 22, 2018 at 2:50 PM, Sebastian Laskawiec wrote: > Very nice! Thanks Katia! > > BTW, what kind of tool did you use for drawings? > > Thanks, > Seb > > On Wed, Mar 21, 2018 at 9:21 PM Katia Aresti wrote: > >> Hi all, >> >> I've published a 3rd article about Vert.x and Infinispan. >> >> The cluster manager using version 9.2 won't be released soon, so I'm >> using 9.1 to showcase and when the version using 9.2 will be released I >> will update the code of the tutorial. >> >> http://blog.infinispan.org/2018/03/clustering-vertx-with-infinispan.html >> >> If you see anything that should be changed, please, contact me ! >> >> Cheers, >> >> Katia >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180322/7c9c96cb/attachment.html From pedro at infinispan.org Fri Mar 23 08:25:34 2018 From: pedro at infinispan.org (Pedro Ruivo) Date: Fri, 23 Mar 2018 12:25:34 +0000 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: Message-ID: Hi Thomas, Is the test in question using any counter/lock? I did see similar behavior with the counter's in our server test suite. The partition handling makes the cache degraded because nodes are starting and stopping concurrently. I'm not sure if there are any JIRA to tracking. Ryan, Dan do you know? If there is none, it should be created. I improved the counters by making the cache start lazily when you first get or define a counter [1]. This workaround solved the issue for us. 
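For readers following along, this is roughly what "get or define a counter" looks like with the embedded counter API in 9.x; the counter name and configuration below are only an example, not something taken from the Vert.x integration:

import org.infinispan.counter.EmbeddedCounterManagerFactory;
import org.infinispan.counter.api.CounterConfiguration;
import org.infinispan.counter.api.CounterManager;
import org.infinispan.counter.api.CounterType;
import org.infinispan.counter.api.StrongCounter;
import org.infinispan.manager.EmbeddedCacheManager;

public class CounterSketch {
   static void useCounter(EmbeddedCacheManager cacheManager) {
      // Obtain the counter manager from an existing cache manager; with the
      // lazy-start change mentioned above, this is the point where the
      // underlying internal cache gets started.
      CounterManager counterManager =
            EmbeddedCounterManagerFactory.asCounterManager(cacheManager);
      counterManager.defineCounter("events",
            CounterConfiguration.builder(CounterType.UNBOUNDED_STRONG).initialValue(0).build());
      StrongCounter counter = counterManager.getStrongCounter("events");
      // Increments are asynchronous and return a CompletableFuture.
      counter.incrementAndGet()
             .thenAccept(value -> System.out.println("counter = " + value));
   }
}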
As a workaround for your test suite, I suggest to make sure the caches (___counter_configuration and org.infinispan.LOCK) have finished their state transfer before stopping the cache managers, by invoking DefaultCacheManager.getCache(*cache-name*) in all the caches managers. Sorry for the inconvenience and the delay in replying. Cheers, Pedro [1] https://issues.jboss.org/browse/ISPN-8860 On 21-03-2018 16:16, Thomas SEGISMONT wrote: > Hi everyone, > > I am working on integrating Infinispan 9.2.Final in vertx-infinispan. > Before merging I wanted to make sure the test suite passed but it > doesn't. It's not the always the same test involved. > > In the logs, I see a lot of messages like "After merge (or coordinator > change), cache still hasn't recovered a majority of members and must > stay in degraded mode. > The context involved are "___counter_configuration" and > "org.infinispan.LOCKS" > > Most often it's harmless but, sometimes, I also see this exception > "ISPN000210: Failed to request state of cache" > Again the cache involved is either "___counter_configuration" or > "org.infinispan.LOCKS" > After this exception, the cache manager is unable to stop. It blocks in > method "terminate" (join on cache future). > > I thought the test suite was too rough (we stop all nodes at the same > time). So I changed it to make sure that: > - nodes start one after the other > - a new node is started only when the previous one indicates HEALTHY status > - nodes stop one after the other > - a node is stopped only when it indicates HEALTHY status > Pretty much what we do on Kubernetes for the readiness check actually. > But it didn't get any better. > > Attached are the logs of such a failing test. > > Note that the Vert.x test itself does not fail, it's only when closing > nodes that we have issues. > > Here's our XML config: > https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml > > Does that ring a bell? Do you need more info? > > Regards, > Thomas > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From rvansa at redhat.com Fri Mar 23 09:41:47 2018 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 23 Mar 2018 14:41:47 +0100 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: Message-ID: This looks similar to [1] which has a fix [2] ready for a while. Please try with it to see if it solves your problem. [1] https://issues.jboss.org/browse/ISPN-8859 [2] https://github.com/infinispan/infinispan/pull/5786 On 03/23/2018 01:25 PM, Pedro Ruivo wrote: > Hi Thomas, > > Is the test in question using any counter/lock? > > I did see similar behavior with the counter's in our server test suite. > The partition handling makes the cache degraded because nodes are > starting and stopping concurrently. > > I'm not sure if there are any JIRA to tracking. Ryan, Dan do you know? > If there is none, it should be created. > > I improved the counters by making the cache start lazily when you first > get or define a counter [1]. This workaround solved the issue for us. > > As a workaround for your test suite, I suggest to make sure the caches > (___counter_configuration and org.infinispan.LOCK) have finished their > state transfer before stopping the cache managers, by invoking > DefaultCacheManager.getCache(*cache-name*) in all the caches managers. 
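A sketch of the workaround described just above, using the internal cache names as they appear in this thread; these names are implementation details and may change between versions, and the surrounding shutdown code is only an assumption about how a test suite might use it:

import org.infinispan.manager.EmbeddedCacheManager;

public class ShutdownWorkaround {
   // Internal cache names quoted in this thread.
   private static final String[] INTERNAL_CACHES =
         {"___counter_configuration", "org.infinispan.LOCKS"};

   static void stopGracefully(EmbeddedCacheManager cacheManager) {
      // Touching the internal caches ensures they are fully started on this node,
      // so their state transfer has a chance to finish before the manager stops.
      for (String name : INTERNAL_CACHES) {
         cacheManager.getCache(name);
      }
      cacheManager.stop();
   }
}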
> > Sorry for the inconvenience and the delay in replying. > > Cheers, > Pedro > > [1] https://issues.jboss.org/browse/ISPN-8860 > > On 21-03-2018 16:16, Thomas SEGISMONT wrote: >> Hi everyone, >> >> I am working on integrating Infinispan 9.2.Final in vertx-infinispan. >> Before merging I wanted to make sure the test suite passed but it >> doesn't. It's not the always the same test involved. >> >> In the logs, I see a lot of messages like "After merge (or coordinator >> change), cache still hasn't recovered a majority of members and must >> stay in degraded mode. >> The context involved are "___counter_configuration" and >> "org.infinispan.LOCKS" >> >> Most often it's harmless but, sometimes, I also see this exception >> "ISPN000210: Failed to request state of cache" >> Again the cache involved is either "___counter_configuration" or >> "org.infinispan.LOCKS" >> After this exception, the cache manager is unable to stop. It blocks in >> method "terminate" (join on cache future). >> >> I thought the test suite was too rough (we stop all nodes at the same >> time). So I changed it to make sure that: >> - nodes start one after the other >> - a new node is started only when the previous one indicates HEALTHY status >> - nodes stop one after the other >> - a node is stopped only when it indicates HEALTHY status >> Pretty much what we do on Kubernetes for the readiness check actually. >> But it didn't get any better. >> >> Attached are the logs of such a failing test. >> >> Note that the Vert.x test itself does not fail, it's only when closing >> nodes that we have issues. >> >> Here's our XML config: >> https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml >> >> Does that ring a bell? Do you need more info? >> >> Regards, >> Thomas >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From tsegismont at gmail.com Fri Mar 23 11:06:37 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Fri, 23 Mar 2018 16:06:37 +0100 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: Message-ID: Hi Pedro, 2018-03-23 13:25 GMT+01:00 Pedro Ruivo : > Hi Thomas, > > Is the test in question using any counter/lock? > I have seen the problem on a test for counters, on another one for locks, as well as well as caches only. But Vert.x starts the ClusteredLockManager and the CounterManager in all cases (even if no lock/counter is created/used) > > I did see similar behavior with the counter's in our server test suite. > The partition handling makes the cache degraded because nodes are > starting and stopping concurrently. > As for me I was able to observe the problem even when stopping nodes one after the other and waiting for cluster to go back to HEALTHY status. Is it possible that the status of the counter and lock caches are not taken into account in cluster health? > > I'm not sure if there are any JIRA to tracking. Ryan, Dan do you know? > If there is none, it should be created. > > I improved the counters by making the cache start lazily when you first > get or define a counter [1]. This workaround solved the issue for us. 
> > As a workaround for your test suite, I suggest to make sure the caches > (___counter_configuration and org.infinispan.LOCK) have finished their > state transfer before stopping the cache managers, by invoking > DefaultCacheManager.getCache(*cache-name*) in all the caches managers. > > Sorry for the inconvenience and the delay in replying. > No problem. > > Cheers, > Pedro > > [1] https://issues.jboss.org/browse/ISPN-8860 > > On 21-03-2018 16:16, Thomas SEGISMONT wrote: > > Hi everyone, > > > > I am working on integrating Infinispan 9.2.Final in vertx-infinispan. > > Before merging I wanted to make sure the test suite passed but it > > doesn't. It's not the always the same test involved. > > > > In the logs, I see a lot of messages like "After merge (or coordinator > > change), cache still hasn't recovered a majority of members and must > > stay in degraded mode. > > The context involved are "___counter_configuration" and > > "org.infinispan.LOCKS" > > > > Most often it's harmless but, sometimes, I also see this exception > > "ISPN000210: Failed to request state of cache" > > Again the cache involved is either "___counter_configuration" or > > "org.infinispan.LOCKS" > > After this exception, the cache manager is unable to stop. It blocks in > > method "terminate" (join on cache future). > > > > I thought the test suite was too rough (we stop all nodes at the same > > time). So I changed it to make sure that: > > - nodes start one after the other > > - a new node is started only when the previous one indicates HEALTHY > status > > - nodes stop one after the other > > - a node is stopped only when it indicates HEALTHY status > > Pretty much what we do on Kubernetes for the readiness check actually. > > But it didn't get any better. > > > > Attached are the logs of such a failing test. > > > > Note that the Vert.x test itself does not fail, it's only when closing > > nodes that we have issues. > > > > Here's our XML config: > > https://github.com/vert-x3/vertx-infinispan/blob/ispn92/ > src/main/resources/default-infinispan.xml > > > > Does that ring a bell? Do you need more info? > > > > Regards, > > Thomas > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180323/04d25941/attachment.html From tsegismont at gmail.com Fri Mar 23 11:07:06 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Fri, 23 Mar 2018 16:07:06 +0100 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: Message-ID: I'll give it a try and keep you posted. Thanks 2018-03-23 14:41 GMT+01:00 Radim Vansa : > This looks similar to [1] which has a fix [2] ready for a while. Please > try with it to see if it solves your problem. > > [1] https://issues.jboss.org/browse/ISPN-8859 > [2] https://github.com/infinispan/infinispan/pull/5786 > > On 03/23/2018 01:25 PM, Pedro Ruivo wrote: > > Hi Thomas, > > > > Is the test in question using any counter/lock? > > > > I did see similar behavior with the counter's in our server test suite. 
> > The partition handling makes the cache degraded because nodes are > > starting and stopping concurrently. > > > > I'm not sure if there are any JIRA to tracking. Ryan, Dan do you know? > > If there is none, it should be created. > > > > I improved the counters by making the cache start lazily when you first > > get or define a counter [1]. This workaround solved the issue for us. > > > > As a workaround for your test suite, I suggest to make sure the caches > > (___counter_configuration and org.infinispan.LOCK) have finished their > > state transfer before stopping the cache managers, by invoking > > DefaultCacheManager.getCache(*cache-name*) in all the caches managers. > > > > Sorry for the inconvenience and the delay in replying. > > > > Cheers, > > Pedro > > > > [1] https://issues.jboss.org/browse/ISPN-8860 > > > > On 21-03-2018 16:16, Thomas SEGISMONT wrote: > >> Hi everyone, > >> > >> I am working on integrating Infinispan 9.2.Final in vertx-infinispan. > >> Before merging I wanted to make sure the test suite passed but it > >> doesn't. It's not the always the same test involved. > >> > >> In the logs, I see a lot of messages like "After merge (or coordinator > >> change), cache still hasn't recovered a majority of members and must > >> stay in degraded mode. > >> The context involved are "___counter_configuration" and > >> "org.infinispan.LOCKS" > >> > >> Most often it's harmless but, sometimes, I also see this exception > >> "ISPN000210: Failed to request state of cache" > >> Again the cache involved is either "___counter_configuration" or > >> "org.infinispan.LOCKS" > >> After this exception, the cache manager is unable to stop. It blocks in > >> method "terminate" (join on cache future). > >> > >> I thought the test suite was too rough (we stop all nodes at the same > >> time). So I changed it to make sure that: > >> - nodes start one after the other > >> - a new node is started only when the previous one indicates HEALTHY > status > >> - nodes stop one after the other > >> - a node is stopped only when it indicates HEALTHY status > >> Pretty much what we do on Kubernetes for the readiness check actually. > >> But it didn't get any better. > >> > >> Attached are the logs of such a failing test. > >> > >> Note that the Vert.x test itself does not fail, it's only when closing > >> nodes that we have issues. > >> > >> Here's our XML config: > >> https://github.com/vert-x3/vertx-infinispan/blob/ispn92/ > src/main/resources/default-infinispan.xml > >> > >> Does that ring a bell? Do you need more info? > >> > >> Regards, > >> Thomas > >> > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180323/89f997c2/attachment.html From pedro at infinispan.org Mon Mar 26 07:16:16 2018 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 26 Mar 2018 12:16:16 +0100 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: Message-ID: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> On 23-03-2018 15:06, Thomas SEGISMONT wrote: > Hi Pedro, > > 2018-03-23 13:25 GMT+01:00 Pedro Ruivo >: > > Hi Thomas, > > Is the test in question using any counter/lock? > > > I have seen the problem on a test for counters, on another one for > locks, as well as well as caches only. > But Vert.x starts the ClusteredLockManager and the CounterManager in all > cases (even if no lock/counter is created/used) > > > I did see similar behavior with the counter's in our server test suite. > The partition handling makes the cache degraded because nodes are > starting and stopping concurrently. > > > As for me I was able to observe the problem even when stopping nodes one > after the other and waiting for cluster to go back to HEALTHY status. > Is it possible that the status of the counter and lock caches are not > taken into account in cluster health? The counter and lock caches are private. So, they aren't in the cluster health neither their name are returned by getCacheNames() method. > > > I'm not sure if there are any JIRA to tracking. Ryan, Dan do you know? > If there is none, it should be created. > > I improved the counters by making the cache start lazily when you first > get or define a counter [1]. This workaround solved the issue for us. > > As a workaround for your test suite, I suggest to make sure the caches > (___counter_configuration and org.infinispan.LOCK) have finished their > state transfer before stopping the cache managers, by invoking > DefaultCacheManager.getCache(*cache-name*) in all the caches managers. > > Sorry for the inconvenience and the delay in replying. > > > No problem. > > > Cheers, > Pedro > > [1] https://issues.jboss.org/browse/ISPN-8860 > > > On 21-03-2018 16:16, Thomas SEGISMONT wrote: > > Hi everyone, > > > > I am working on integrating Infinispan 9.2.Final in vertx-infinispan. > > Before merging I wanted to make sure the test suite passed but it > > doesn't. It's not the always the same test involved. > > > > In the logs, I see a lot of messages like "After merge (or > coordinator > > change), cache still hasn't recovered a majority of members and must > > stay in degraded mode. > > The context involved are "___counter_configuration" and > > "org.infinispan.LOCKS" > > > > Most often it's harmless but, sometimes, I also see this exception > > "ISPN000210: Failed to request state of cache" > > Again the cache involved is either "___counter_configuration" or > > "org.infinispan.LOCKS" > > After this exception, the cache manager is unable to stop. It > blocks in > > method "terminate" (join on cache future). > > > > I thought the test suite was too rough (we stop all nodes at the same > > time). So I changed it to make sure that: > > - nodes start one after the other > > - a new node is started only when the previous one indicates > HEALTHY status > > - nodes stop one after the other > > - a node is stopped only when it indicates HEALTHY status > > Pretty much what we do on Kubernetes for the readiness check > actually. > > But it didn't get any better. > > > > Attached are the logs of such a failing test. 
> > > > Note that the Vert.x test itself does not fail, it's only when > closing > > nodes that we have issues. > > > > Here's our XML config: > > > https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml > > > > > Does that ring a bell? Do you need more info? > > > > Regards, > > Thomas > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From tsegismont at gmail.com Mon Mar 26 07:41:47 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Mon, 26 Mar 2018 13:41:47 +0200 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> Message-ID: 2018-03-26 13:16 GMT+02:00 Pedro Ruivo : > > > On 23-03-2018 15:06, Thomas SEGISMONT wrote: > > Hi Pedro, > > > > 2018-03-23 13:25 GMT+01:00 Pedro Ruivo > >: > > > > Hi Thomas, > > > > Is the test in question using any counter/lock? > > > > > > I have seen the problem on a test for counters, on another one for > > locks, as well as well as caches only. > > But Vert.x starts the ClusteredLockManager and the CounterManager in all > > cases (even if no lock/counter is created/used) > > > > > > I did see similar behavior with the counter's in our server test > suite. > > The partition handling makes the cache degraded because nodes are > > starting and stopping concurrently. > > > > > > As for me I was able to observe the problem even when stopping nodes one > > after the other and waiting for cluster to go back to HEALTHY status. > > Is it possible that the status of the counter and lock caches are not > > taken into account in cluster health? > > The counter and lock caches are private. So, they aren't in the cluster > health neither their name are returned by getCacheNames() method. > Thanks for the details. I'm not concerned with these internal caches not being listed when calling getCacheNames. However, the cluster health status should include their status as well. Cluster status testing is the recommended way to implement readiness checks on Kubernetes for example. What do you think Sebastian? > > > > > > > I'm not sure if there are any JIRA to tracking. Ryan, Dan do you > know? > > If there is none, it should be created. > > > > I improved the counters by making the cache start lazily when you > first > > get or define a counter [1]. This workaround solved the issue for us. > > > > As a workaround for your test suite, I suggest to make sure the > caches > > (___counter_configuration and org.infinispan.LOCK) have finished > their > > state transfer before stopping the cache managers, by invoking > > DefaultCacheManager.getCache(*cache-name*) in all the caches > managers. > > > > Sorry for the inconvenience and the delay in replying. > > > > > > No problem. 
> > > > > > Cheers, > > Pedro > > > > [1] https://issues.jboss.org/browse/ISPN-8860 > > > > > > On 21-03-2018 16:16, Thomas SEGISMONT wrote: > > > Hi everyone, > > > > > > I am working on integrating Infinispan 9.2.Final in > vertx-infinispan. > > > Before merging I wanted to make sure the test suite passed but it > > > doesn't. It's not the always the same test involved. > > > > > > In the logs, I see a lot of messages like "After merge (or > > coordinator > > > change), cache still hasn't recovered a majority of members and > must > > > stay in degraded mode. > > > The context involved are "___counter_configuration" and > > > "org.infinispan.LOCKS" > > > > > > Most often it's harmless but, sometimes, I also see this exception > > > "ISPN000210: Failed to request state of cache" > > > Again the cache involved is either "___counter_configuration" or > > > "org.infinispan.LOCKS" > > > After this exception, the cache manager is unable to stop. It > > blocks in > > > method "terminate" (join on cache future). > > > > > > I thought the test suite was too rough (we stop all nodes at the > same > > > time). So I changed it to make sure that: > > > - nodes start one after the other > > > - a new node is started only when the previous one indicates > > HEALTHY status > > > - nodes stop one after the other > > > - a node is stopped only when it indicates HEALTHY status > > > Pretty much what we do on Kubernetes for the readiness check > > actually. > > > But it didn't get any better. > > > > > > Attached are the logs of such a failing test. > > > > > > Note that the Vert.x test itself does not fail, it's only when > > closing > > > nodes that we have issues. > > > > > > Here's our XML config: > > > > > https://github.com/vert-x3/vertx-infinispan/blob/ispn92/ > src/main/resources/default-infinispan.xml > > src/main/resources/default-infinispan.xml> > > > > > > Does that ring a bell? Do you need more info? > > > > > > Regards, > > > Thomas > > > > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180326/1cb5559e/attachment.html From slaskawi at redhat.com Tue Mar 27 04:03:08 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 27 Mar 2018 08:03:08 +0000 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> Message-ID: At the moment, the cluster health status checker enumerates all caches in the cache manager [1] and checks whether those cashes are running and not in degraded more [2]. I'm not sure how counter caches have been implemented. One thing is for sure - they should be taken into account in this loop [3]. 
[1] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22 [2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25 [3] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24 On Mon, Mar 26, 2018 at 1:59 PM Thomas SEGISMONT wrote: > 2018-03-26 13:16 GMT+02:00 Pedro Ruivo : > >> >> >> On 23-03-2018 15:06, Thomas SEGISMONT wrote: >> > Hi Pedro, >> > >> > 2018-03-23 13:25 GMT+01:00 Pedro Ruivo > > >: >> > >> > Hi Thomas, >> > >> > Is the test in question using any counter/lock? >> > >> > >> > I have seen the problem on a test for counters, on another one for >> > locks, as well as well as caches only. >> > But Vert.x starts the ClusteredLockManager and the CounterManager in all >> > cases (even if no lock/counter is created/used) >> > >> > >> > I did see similar behavior with the counter's in our server test >> suite. >> > The partition handling makes the cache degraded because nodes are >> > starting and stopping concurrently. >> > >> > >> > As for me I was able to observe the problem even when stopping nodes one >> > after the other and waiting for cluster to go back to HEALTHY status. >> > Is it possible that the status of the counter and lock caches are not >> > taken into account in cluster health? >> >> The counter and lock caches are private. So, they aren't in the cluster >> health neither their name are returned by getCacheNames() method. >> > > Thanks for the details. > > I'm not concerned with these internal caches not being listed when calling > getCacheNames. > > However, the cluster health status should include their status as well. > Cluster status testing is the recommended way to implement readiness > checks on Kubernetes for example. > > What do you think Sebastian? > > >> >> > >> > >> > I'm not sure if there are any JIRA to tracking. Ryan, Dan do you >> know? >> > If there is none, it should be created. >> > >> > I improved the counters by making the cache start lazily when you >> first >> > get or define a counter [1]. This workaround solved the issue for >> us. >> > >> > As a workaround for your test suite, I suggest to make sure the >> caches >> > (___counter_configuration and org.infinispan.LOCK) have finished >> their >> > state transfer before stopping the cache managers, by invoking >> > DefaultCacheManager.getCache(*cache-name*) in all the caches >> managers. >> > >> > Sorry for the inconvenience and the delay in replying. >> > >> > >> > No problem. >> > >> > >> > Cheers, >> > Pedro >> > >> > [1] https://issues.jboss.org/browse/ISPN-8860 >> > >> > >> > On 21-03-2018 16:16, Thomas SEGISMONT wrote: >> > > Hi everyone, >> > > >> > > I am working on integrating Infinispan 9.2.Final in >> vertx-infinispan. >> > > Before merging I wanted to make sure the test suite passed but it >> > > doesn't. It's not the always the same test involved. >> > > >> > > In the logs, I see a lot of messages like "After merge (or >> > coordinator >> > > change), cache still hasn't recovered a majority of members and >> must >> > > stay in degraded mode. 
>> > > The context involved are "___counter_configuration" and >> > > "org.infinispan.LOCKS" >> > > >> > > Most often it's harmless but, sometimes, I also see this >> exception >> > > "ISPN000210: Failed to request state of cache" >> > > Again the cache involved is either "___counter_configuration" or >> > > "org.infinispan.LOCKS" >> > > After this exception, the cache manager is unable to stop. It >> > blocks in >> > > method "terminate" (join on cache future). >> > > >> > > I thought the test suite was too rough (we stop all nodes at the >> same >> > > time). So I changed it to make sure that: >> > > - nodes start one after the other >> > > - a new node is started only when the previous one indicates >> > HEALTHY status >> > > - nodes stop one after the other >> > > - a node is stopped only when it indicates HEALTHY status >> > > Pretty much what we do on Kubernetes for the readiness check >> > actually. >> > > But it didn't get any better. >> > > >> > > Attached are the logs of such a failing test. >> > > >> > > Note that the Vert.x test itself does not fail, it's only when >> > closing >> > > nodes that we have issues. >> > > >> > > Here's our XML config: >> > > >> > >> https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml >> > < >> https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml >> > >> > > >> > > Does that ring a bell? Do you need more info? >> > > >> > > Regards, >> > > Thomas >> > > >> > > >> > > >> > > _______________________________________________ >> > > infinispan-dev mailing list >> > > infinispan-dev at lists.jboss.org >> > >> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org > infinispan-dev at lists.jboss.org> >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180327/2bd892a6/attachment-0001.html From tsegismont at gmail.com Tue Mar 27 04:16:44 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Tue, 27 Mar 2018 10:16:44 +0200 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> Message-ID: Thanks Sebastian. Is there a JIRA for this already? 2018-03-27 10:03 GMT+02:00 Sebastian Laskawiec : > At the moment, the cluster health status checker enumerates all caches in > the cache manager [1] and checks whether those cashes are running and not > in degraded more [2]. > > I'm not sure how counter caches have been implemented. One thing is for > sure - they should be taken into account in this loop [3]. 
> > [1] https://github.com/infinispan/infinispan/blob/ > master/core/src/main/java/org/infinispan/health/impl/ > ClusterHealthImpl.java#L22 > [2] https://github.com/infinispan/infinispan/blob/ > master/core/src/main/java/org/infinispan/health/impl/ > CacheHealthImpl.java#L25 > [3] https://github.com/infinispan/infinispan/blob/ > master/core/src/main/java/org/infinispan/health/impl/ > ClusterHealthImpl.java#L23-L24 > > On Mon, Mar 26, 2018 at 1:59 PM Thomas SEGISMONT > wrote: > >> 2018-03-26 13:16 GMT+02:00 Pedro Ruivo : >> >>> >>> >>> On 23-03-2018 15:06, Thomas SEGISMONT wrote: >>> > Hi Pedro, >>> > >>> > 2018-03-23 13:25 GMT+01:00 Pedro Ruivo >> > >: >>> > >>> > Hi Thomas, >>> > >>> > Is the test in question using any counter/lock? >>> > >>> > >>> > I have seen the problem on a test for counters, on another one for >>> > locks, as well as well as caches only. >>> > But Vert.x starts the ClusteredLockManager and the CounterManager in >>> all >>> > cases (even if no lock/counter is created/used) >>> > >>> > >>> > I did see similar behavior with the counter's in our server test >>> suite. >>> > The partition handling makes the cache degraded because nodes are >>> > starting and stopping concurrently. >>> > >>> > >>> > As for me I was able to observe the problem even when stopping nodes >>> one >>> > after the other and waiting for cluster to go back to HEALTHY status. >>> > Is it possible that the status of the counter and lock caches are not >>> > taken into account in cluster health? >>> >>> The counter and lock caches are private. So, they aren't in the cluster >>> health neither their name are returned by getCacheNames() method. >>> >> >> Thanks for the details. >> >> I'm not concerned with these internal caches not being listed when >> calling getCacheNames. >> >> However, the cluster health status should include their status as well. >> Cluster status testing is the recommended way to implement readiness >> checks on Kubernetes for example. >> >> What do you think Sebastian? >> >> >>> >>> > >>> > >>> > I'm not sure if there are any JIRA to tracking. Ryan, Dan do you >>> know? >>> > If there is none, it should be created. >>> > >>> > I improved the counters by making the cache start lazily when you >>> first >>> > get or define a counter [1]. This workaround solved the issue for >>> us. >>> > >>> > As a workaround for your test suite, I suggest to make sure the >>> caches >>> > (___counter_configuration and org.infinispan.LOCK) have finished >>> their >>> > state transfer before stopping the cache managers, by invoking >>> > DefaultCacheManager.getCache(*cache-name*) in all the caches >>> managers. >>> > >>> > Sorry for the inconvenience and the delay in replying. >>> > >>> > >>> > No problem. >>> > >>> > >>> > Cheers, >>> > Pedro >>> > >>> > [1] https://issues.jboss.org/browse/ISPN-8860 >>> > >>> > >>> > On 21-03-2018 16:16, Thomas SEGISMONT wrote: >>> > > Hi everyone, >>> > > >>> > > I am working on integrating Infinispan 9.2.Final in >>> vertx-infinispan. >>> > > Before merging I wanted to make sure the test suite passed but >>> it >>> > > doesn't. It's not the always the same test involved. >>> > > >>> > > In the logs, I see a lot of messages like "After merge (or >>> > coordinator >>> > > change), cache still hasn't recovered a majority of members and >>> must >>> > > stay in degraded mode. 
>>> > > The context involved are "___counter_configuration" and >>> > > "org.infinispan.LOCKS" >>> > > >>> > > Most often it's harmless but, sometimes, I also see this >>> exception >>> > > "ISPN000210: Failed to request state of cache" >>> > > Again the cache involved is either "___counter_configuration" or >>> > > "org.infinispan.LOCKS" >>> > > After this exception, the cache manager is unable to stop. It >>> > blocks in >>> > > method "terminate" (join on cache future). >>> > > >>> > > I thought the test suite was too rough (we stop all nodes at >>> the same >>> > > time). So I changed it to make sure that: >>> > > - nodes start one after the other >>> > > - a new node is started only when the previous one indicates >>> > HEALTHY status >>> > > - nodes stop one after the other >>> > > - a node is stopped only when it indicates HEALTHY status >>> > > Pretty much what we do on Kubernetes for the readiness check >>> > actually. >>> > > But it didn't get any better. >>> > > >>> > > Attached are the logs of such a failing test. >>> > > >>> > > Note that the Vert.x test itself does not fail, it's only when >>> > closing >>> > > nodes that we have issues. >>> > > >>> > > Here's our XML config: >>> > > >>> > https://github.com/vert-x3/vertx-infinispan/blob/ispn92/ >>> src/main/resources/default-infinispan.xml >>> > >> src/main/resources/default-infinispan.xml> >>> > > >>> > > Does that ring a bell? Do you need more info? >>> > > >>> > > Regards, >>> > > Thomas >>> > > >>> > > >>> > > >>> > > _______________________________________________ >>> > > infinispan-dev mailing list >>> > > infinispan-dev at lists.jboss.org >>> > >>> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> > > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >> jboss.org> >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> > >>> > >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180327/82abbd65/attachment.html From pedro at infinispan.org Tue Mar 27 05:08:21 2018 From: pedro at infinispan.org (Pedro Ruivo) Date: Tue, 27 Mar 2018 10:08:21 +0100 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> Message-ID: <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> On 27-03-2018 09:03, Sebastian Laskawiec wrote: > At the moment, the cluster health status checker enumerates all caches > in the cache manager [1] and checks whether those cashes are running and > not in degraded more [2]. > > I'm not sure how counter caches have been implemented. 
One thing is for > sure - they should be taken into account in this loop [3]. The private caches aren't listed by CacheManager.getCacheNames(). We have to check them via InternalCacheRegistry.getInternalCacheNames(). I'll open a JIRA if you don't mind :) > > [1] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22 > [2] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25 > [3] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24 > > On Mon, Mar 26, 2018 at 1:59 PM Thomas SEGISMONT > wrote: > > 2018-03-26 13:16 GMT+02:00 Pedro Ruivo >: > > > > On 23-03-2018 15:06, Thomas SEGISMONT wrote: > > Hi Pedro, > > > > 2018-03-23 13:25 GMT+01:00 Pedro Ruivo > > >>: > > > >? ? ?Hi Thomas, > > > >? ? ?Is the test in question using any counter/lock? > > > > > > I have seen the problem on a test for counters, on another one for > > locks, as well as well as caches only. > > But Vert.x starts the ClusteredLockManager and the CounterManager in all > > cases (even if no lock/counter is created/used) > > > > > >? ? ?I did see similar behavior with the counter's in our server test suite. > >? ? ?The partition handling makes the cache degraded because nodes are > >? ? ?starting and stopping concurrently. > > > > > > As for me I was able to observe the problem even when stopping nodes one > > after the other and waiting for cluster to go back to HEALTHY status. > > Is it possible that the status of the counter and lock caches are not > > taken into account in cluster health? > > The counter and lock caches are private. So, they aren't in the > cluster > health neither their name are returned by getCacheNames() method. > > > Thanks for the details. > > I'm not concerned with these internal caches not being listed when > calling getCacheNames. > > However, the cluster health status should include their status as well. > Cluster status testing is the recommended way to implement readiness > checks on Kubernetes for example. > > What do you think Sebastian? > > > > > > > >? ? ?I'm not sure if there are any JIRA to tracking. Ryan, Dan > do you know? > >? ? ?If there is none, it should be created. > > > >? ? ?I improved the counters by making the cache start lazily > when you first > >? ? ?get or define a counter [1]. This workaround solved the > issue for us. > > > >? ? ?As a workaround for your test suite, I suggest to make > sure the caches > >? ? ?(___counter_configuration and org.infinispan.LOCK) have > finished their > >? ? ?state transfer before stopping the cache managers, by > invoking > >? ? ?DefaultCacheManager.getCache(*cache-name*) in all the > caches managers. > > > >? ? ?Sorry for the inconvenience and the delay in replying. > > > > > > No problem. > > > > > >? ? ?Cheers, > >? ? ?Pedro > > > >? ? ?[1] https://issues.jboss.org/browse/ISPN-8860 > >? ? ? > > > >? ? ?On 21-03-2018 16:16, Thomas SEGISMONT wrote: > >? ? ? > Hi everyone, > >? ? ? > > >? ? ? > I am working on integrating Infinispan 9.2.Final in > vertx-infinispan. > >? ? ? > Before merging I wanted to make sure the test suite > passed but it > >? ? ? > doesn't. It's not the always the same test involved. > >? ? ? > > >? ? ? > In the logs, I see a lot of messages like "After merge (or > >? ? ?coordinator > >? ? ? > change), cache still hasn't recovered a majority of > members and must > >? ? ? > stay in degraded mode. 
> >? ? ? > The context involved are "___counter_configuration" and > >? ? ? > "org.infinispan.LOCKS" > >? ? ? > > >? ? ? > Most often it's harmless but, sometimes, I also see > this exception > >? ? ? > "ISPN000210: Failed to request state of cache" > >? ? ? > Again the cache involved is either > "___counter_configuration" or > >? ? ? > "org.infinispan.LOCKS" > >? ? ? > After this exception, the cache manager is unable to > stop. It > >? ? ?blocks in > >? ? ? > method "terminate" (join on cache future). > >? ? ? > > >? ? ? > I thought the test suite was too rough (we stop all > nodes at the same > >? ? ? > time). So I changed it to make sure that: > >? ? ? > - nodes start one after the other > >? ? ? > - a new node is started only when the previous one > indicates > >? ? ?HEALTHY status > >? ? ? > - nodes stop one after the other > >? ? ? > - a node is stopped only when it indicates HEALTHY status > >? ? ? > Pretty much what we do on Kubernetes for the readiness > check > >? ? ?actually. > >? ? ? > But it didn't get any better. > >? ? ? > > >? ? ? > Attached are the logs of such a failing test. > >? ? ? > > >? ? ? > Note that the Vert.x test itself does not fail, it's > only when > >? ? ?closing > >? ? ? > nodes that we have issues. > >? ? ? > > >? ? ? > Here's our XML config: > >? ? ? > > > > https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml > > > ? > >? ? ? > > >? ? ? > Does that ring a bell? Do you need more info? > >? ? ? > > >? ? ? > Regards, > >? ? ? > Thomas > >? ? ? > > >? ? ? > > >? ? ? > > >? ? ? > _______________________________________________ > >? ? ? > infinispan-dev mailing list > >? ? ? > infinispan-dev at lists.jboss.org > > >? ? ? > > >? ? ? > https://lists.jboss.org/mailman/listinfo/infinispan-dev > >? ? ? > >? ? ? > > >? ? ?_______________________________________________ > >? ? ?infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > >? ? ? > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From pedro at infinispan.org Tue Mar 27 05:15:47 2018 From: pedro at infinispan.org (Pedro Ruivo) Date: Tue, 27 Mar 2018 10:15:47 +0100 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> Message-ID: <6c764138-fe92-3a05-b351-d8db086c866c@infinispan.org> JIRA: https://issues.jboss.org/browse/ISPN-8994 On 27-03-2018 10:08, Pedro Ruivo wrote: > > > On 27-03-2018 09:03, Sebastian Laskawiec wrote: >> At the moment, the cluster health status checker enumerates all caches >> in the cache manager [1] and checks whether those cashes are running >> and not in degraded more [2]. 
>>
>> I'm not sure how counter caches have been implemented. One thing is for
>> sure - they should be taken into account in this loop [3].
>
> The private caches aren't listed by CacheManager.getCacheNames(). We
> have to check them via InternalCacheRegistry.getInternalCacheNames().
>
> I'll open a JIRA if you don't mind :)
>
>> [1] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22
>> [2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25
>> [3] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24

From sergey.chernolyas at gmail.com  Thu Mar 29 05:51:46 2018
From: sergey.chernolyas at gmail.com (Sergey Chernolyas)
Date: Thu, 29 Mar 2018 12:51:46 +0300
Subject: [infinispan-dev] Problem with equal configuration of Cassandra for two caches
Message-ID: 

Hi!

I have run into a problem: I have two caches that use the Cassandra Store,
and each store has its own configuration. But... both of them end up using
the configuration of the cache that was loaded last.

-- 
---------------------

With best regards, Sergey Chernolyas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180329/bd6e065a/attachment.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: InfinispanCassandra.zip
Type: application/x-zip-compressed
Size: 14258 bytes
Desc: not available
Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180329/bd6e065a/attachment.bin

From slaskawi at redhat.com  Fri Mar 30 07:42:31 2018
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Fri, 30 Mar 2018 11:42:31 +0000
Subject: [infinispan-dev] Problem with equal configuration of Cassandra for two caches
In-Reply-To: 
References: 
Message-ID: 

Hey!

I investigated this issue and indeed there is a problem there. See
https://issues.jboss.org/browse/ISPN-9027 for more info.

There is a workaround for it, but you're not going to like it. You'd need
to copy CassandraStore from the Cassandra Cache Store into CassandraStore2
and rebuild the archive. Then one of the caches needs to use CassandraStore
with the proper configuration and the second one should use CassandraStore2.

Thanks,
Sebastian

On Thu, Mar 29, 2018 at 11:53 AM Sergey Chernolyas <
sergey.chernolyas at gmail.com> wrote:

> Hi!
>
> I have run into a problem: I have two caches that use the Cassandra
> Store, and each store has its own configuration. But... both of them end
> up using the configuration of the cache that was loaded last.
>
>
> --
> ---------------------
>
> With best regards, Sergey Chernolyas
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180330/d2afbdab/attachment-0001.html
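
Taken together, the shutdown threads above suggest two practical steps for embedded deployments: gate readiness on the cluster health status, and touch the internal counter/lock caches on every cache manager before stopping it so their state transfer can finish (Pedro's workaround). The sketch below is only an illustration of that advice, not an official recipe: it assumes the Infinispan 9.2 embedded API (EmbeddedCacheManager#getHealth() and HealthStatus) and the internal cache names quoted in the logs above ("___counter_configuration", "org.infinispan.LOCKS"), all of which should be verified against the version actually in use.

import org.infinispan.health.HealthStatus;
import org.infinispan.manager.EmbeddedCacheManager;

public class GracefulShutdownSketch {

   // Readiness check in the spirit of the thread: report ready only while
   // the cluster health is HEALTHY (the same status the tests wait for).
   static boolean isReady(EmbeddedCacheManager cacheManager) {
      return cacheManager.getHealth()
            .getClusterHealth()
            .getHealthStatus() == HealthStatus.HEALTHY;
   }

   // Pedro's suggested workaround: touch the internal counter/lock caches on
   // every manager so their state transfer completes, then stop the managers
   // one by one. The cache names are the ones quoted in the logs above and
   // are an assumption here - check them for the Infinispan version in use.
   static void stopAll(Iterable<EmbeddedCacheManager> cacheManagers) {
      for (EmbeddedCacheManager cm : cacheManagers) {
         cm.getCache("___counter_configuration");
         cm.getCache("org.infinispan.LOCKS");
      }
      for (EmbeddedCacheManager cm : cacheManagers) {
         cm.stop();
      }
   }
}

Since ISPN-8860 makes the counter configuration cache start lazily and ISPN-8994 tracks including the internal caches in the health check, the explicit getCache() calls should become unnecessary on later versions.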