From sergey.chernolyas at gmail.com Sun Apr 1 23:46:07 2018 From: sergey.chernolyas at gmail.com (Sergey Chernolyas) Date: Mon, 2 Apr 2018 06:46:07 +0300 Subject: [infinispan-dev] CLI hangs for huge cache if RocksDB is used Message-ID: Hi! I am using RocksDB Cache Storage. I faced with problem that CLI/Web hangs long time before open information about all caches. I uploaded to one cache 30_000_000 objects. Last versions of RocksDB has property 'rocksdb.estimate-num-keys'. The property contains count of keys. I supported the property in method RocksDBCacheStore.size . But ... performance of CLI/Web changes a little. How I can fix a problem with CLI/Web performance ? A lot of thanks! -- --------------------- With best regards, Sergey Chernolyas -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180402/966c8002/attachment.html From galder at redhat.com Mon Apr 2 13:36:44 2018 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 02 Apr 2018 17:36:44 +0000 Subject: [infinispan-dev] CLI hangs for huge cache if RocksDB is used In-Reply-To: References: Message-ID: Infinispan version? Thread dumps? Best if you open a user forum post here: https://developer.jboss.org/en/infinispan/content?filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D Cheers, On Mon, Apr 2, 2018 at 5:47 AM Sergey Chernolyas < sergey.chernolyas at gmail.com> wrote: > Hi! > > I am using RocksDB Cache Storage. I faced with problem that CLI/Web hangs > long time before open information about all caches. > I uploaded to one cache 30_000_000 objects. Last versions of RocksDB has > property 'rocksdb.estimate-num-keys'. The property contains count of keys. > I supported the property in method RocksDBCacheStore.size . > But ... performance of CLI/Web changes a little. > > How I can fix a problem with CLI/Web performance ? > > A lot of thanks! > > -- > --------------------- > > With best regards, Sergey Chernolyas > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180402/dad07dc9/attachment.html From ttarrant at redhat.com Mon Apr 2 13:49:20 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 02 Apr 2018 17:49:20 +0000 Subject: [infinispan-dev] CLI hangs for huge cache if RocksDB is used In-Reply-To: References: Message-ID: I think it makes sense to discuss this here as Will has been busy working on cache store iteration performance, and I'm sure he's interested in the rocksdb specific optimizations. -- Tristan Tarrant Infinispan Lead & Data Grid Architect Red Hat On Mon, 2 Apr 2018, 19:37 Galder Zamarreno, wrote: > Infinispan version? Thread dumps? > > Best if you open a user forum post here: > > https://developer.jboss.org/en/infinispan/content?filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D > > Cheers, > > On Mon, Apr 2, 2018 at 5:47 AM Sergey Chernolyas < > sergey.chernolyas at gmail.com> wrote: > >> Hi! >> >> I am using RocksDB Cache Storage. I faced with problem that CLI/Web hangs >> long time before open information about all caches. >> I uploaded to one cache 30_000_000 objects. Last versions of RocksDB has >> property 'rocksdb.estimate-num-keys'. The property contains count of keys. 
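(For reference, the 'rocksdb.estimate-num-keys' property discussed above is exposed through the RocksJava API roughly as in the sketch below. This is a minimal standalone example against a plain RocksDB handle with a placeholder database path, not the actual RocksDBStore wiring inside Infinispan; the value is an estimate maintained internally by RocksDB, so reading it avoids iterating the whole store.)

    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public class KeyCountEstimate {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            try (Options options = new Options().setCreateIfMissing(true);
                 RocksDB db = RocksDB.open(options, "/tmp/example-db")) {   // placeholder path
                // The property value is a decimal string holding RocksDB's key-count estimate.
                long keys = Long.parseLong(db.getProperty("rocksdb.estimate-num-keys"));
                System.out.println("Estimated number of keys: " + keys);
            }
        }
    }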
>> I supported the property in method RocksDBCacheStore.size . >> But ... performance of CLI/Web changes a little. >> >> How I can fix a problem with CLI/Web performance ? >> >> A lot of thanks! >> >> -- >> --------------------- >> >> With best regards, Sergey Chernolyas >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180402/5c91e0ff/attachment.html From rvansa at redhat.com Tue Apr 3 05:14:28 2018 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 3 Apr 2018 11:14:28 +0200 Subject: [infinispan-dev] Public cluster discovery service In-Reply-To: References: <26fac1ec-370a-fd57-4157-47227e189643@mailbox.org> Message-ID: <5368538c-f5ac-d57e-a142-0d06eb82d2ee@redhat.com> On 03/09/2018 11:26 AM, Sebastian Laskawiec wrote: > > > On Thu, Mar 8, 2018 at 11:47 AM Bela Ban > wrote: > > > > On 08/03/18 10:49, Sebastian Laskawiec wrote: > > Hey Bela, > > > > I've just stumbled upon this: > > https://coreos.com/os/docs/latest/cluster-discovery.html > > > > The Etcd folks created a public discovery service. You need to use a > > token and get a discovery string back. I believe that's super, super > > useful for demos across multiple public clouds. > > > Why? This is conceptually the same as running a GossipRouter on a > public, DNS-mapped, IP address... > > > The real challenge with cross-cloud clusters is (as you and I > discovered) to bridge the non-public addresses of local cloud members > with members running in different clouds. > > > I totally agree with you here. It's pretty bad that there is no way > for the Pod to learn what is the external Load Balancer address that > exposes it. > > The only way I can see to fix this is to write a very small > application which will do this mapping. Then the app should use > PodInjectionPolicy [1] (or a similar Admission Controller [2]) > > So back to the publicly available GossipRouter - I still believe there > is a potential in this solution and we should create a small tutorial > telling users how to do it (maybe a template for OpenShift?). But > granted - Admission Controller work (the mapper I mentioned the above) > is by far more important. > > [1] https://kubernetes.io/docs/tasks/inject-data-application/podpreset/ > [2] https://kubernetes.io/docs/admin/admission-controllers/ I think that the question of mapping to public IPs is almost orthogonal to the existence of the service. Nodes should publish any address/data they want, the IPs may be relevant only within the internal network. The purpose as I see it is to get cluster going ASAP. Even without the need of turning the GossipRouter on. > > Unless you make all members use public IP addresses, but that's not > something that's typically advised in a cloud env. > > > > What do you think about that? Perhaps we could implement an > ETCD_PING > > and just reuse their service or write our own? > > Sure, should be simple. But - again - what's the goal? If > discovery.etcd.io can be used as a > public *permanent* discovery service, > yes, cool > > > You convinced me - GossipRouter is the right way to go here. 
I'd personally prefer a HTTP-based service with some JSONs - it's easy to inspect and see what it does, therefore I'd trust it a bit more. Also it's unlikely to block HTTP communication from any node. Also it's easy to debug which node has connected and which has not - simply peek on the JSON list. I wouldn't parasite on etcd's servers, rather spawn our discovery.infinispan.org. Besides looking better, we could also get some interesting data (what sizes of cluster are people using, how often are they restart the servers...). Radim > > > Thanks, > > Seb > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Bela Ban | http://www.jgroups.org > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From sanne at infinispan.org Fri Apr 6 11:15:16 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 6 Apr 2018 16:15:16 +0100 Subject: [infinispan-dev] WFLYTX0013 in the Infinispan Openshift Template Message-ID: Hi all, I've started to use the Infinispan Openshift Template and was browsing through the errors and warnings this produces. In particular I noticed "WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique." being produced by the transaction system. The node id is usually not needed for developer's convenience and assuming there's a single node in "dev mode", yet clearly the Infinispan template is meant to work with multiple nodes running so this warning seems concerning. I'm not sure what the impact is on the transaction manager so I asked on the Narayana forums; Tom pointed me to some thourough design documents and also suggested the EAP image does set the node identifier: - https://developer.jboss.org/message/981702#981702 WDYT? we probably want the Infinispan template to set this as well, or silence the warning? Thanks, Sanne From slaskawi at redhat.com Mon Apr 9 04:26:42 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 09 Apr 2018 08:26:42 +0000 Subject: [infinispan-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Thanks for looking into it Sanne. Of course, we should add it (it can be set to the same name as hostname since those are unique in Kubernetes). Created https://issues.jboss.org/browse/ISPN-9051 for it. Thanks again! Seb On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero wrote: > Hi all, > > I've started to use the Infinispan Openshift Template and was browsing > through the errors and warnings this produces. > > In particular I noticed "WFLYTX0013: Node identifier property is set > to the default value. Please make sure it is unique." being produced > by the transaction system. > > The node id is usually not needed for developer's convenience and > assuming there's a single node in "dev mode", yet clearly the > Infinispan template is meant to work with multiple nodes running so > this warning seems concerning. 
> > I'm not sure what the impact is on the transaction manager so I asked > on the Narayana forums; Tom pointed me to some thourough design > documents and also suggested the EAP image does set the node > identifier: > - https://developer.jboss.org/message/981702#981702 > > WDYT? we probably want the Infinispan template to set this as well, or > silence the warning? > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180409/cce414ca/attachment-0001.html From sanne at infinispan.org Mon Apr 9 04:37:43 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 9 Apr 2018 09:37:43 +0100 Subject: [infinispan-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: On 9 April 2018 at 09:26, Sebastian Laskawiec wrote: > Thanks for looking into it Sanne. Of course, we should add it (it can be set > to the same name as hostname since those are unique in Kubernetes). > > Created https://issues.jboss.org/browse/ISPN-9051 for it. > > Thanks again! > Seb Thanks Sebastian! > > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero wrote: >> >> Hi all, >> >> I've started to use the Infinispan Openshift Template and was browsing >> through the errors and warnings this produces. >> >> In particular I noticed "WFLYTX0013: Node identifier property is set >> to the default value. Please make sure it is unique." being produced >> by the transaction system. >> >> The node id is usually not needed for developer's convenience and >> assuming there's a single node in "dev mode", yet clearly the >> Infinispan template is meant to work with multiple nodes running so >> this warning seems concerning. >> >> I'm not sure what the impact is on the transaction manager so I asked >> on the Narayana forums; Tom pointed me to some thourough design >> documents and also suggested the EAP image does set the node >> identifier: >> - https://developer.jboss.org/message/981702#981702 >> >> WDYT? we probably want the Infinispan template to set this as well, or >> silence the warning? >> >> Thanks, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Mon Apr 9 10:40:51 2018 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 09 Apr 2018 14:40:51 +0000 Subject: [infinispan-dev] Weekly meeting minutes Message-ID: Hi, Please find minutes from our weekly meeting here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-04-09-14.00.html Cheers, Galder -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180409/b9b125ca/attachment.html From slaskawi at redhat.com Wed Apr 11 08:31:51 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 11 Apr 2018 12:31:51 +0000 Subject: [infinispan-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Hey Rado, Paul, I started looking into this issue and it turned out that WF subsystem template doesn't provide `node-identifier` attribute [1]. I'm not sure if you guys are the right people to ask, but is it safe to leave it set to default? Or shall I override our Infinispan templates and add this parameter (as I mentioned before, in OpenShift this I wanted to set it as Pod name trimmed to the last 23 chars since this is the limit). Thanks, Seb [1] usually set to node-identifier="${jboss.node.name}" On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero wrote: > On 9 April 2018 at 09:26, Sebastian Laskawiec wrote: > > Thanks for looking into it Sanne. Of course, we should add it (it can be > set > > to the same name as hostname since those are unique in Kubernetes). > > > > Created https://issues.jboss.org/browse/ISPN-9051 for it. > > > > Thanks again! > > Seb > > Thanks Sebastian! > > > > > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero > wrote: > >> > >> Hi all, > >> > >> I've started to use the Infinispan Openshift Template and was browsing > >> through the errors and warnings this produces. > >> > >> In particular I noticed "WFLYTX0013: Node identifier property is set > >> to the default value. Please make sure it is unique." being produced > >> by the transaction system. > >> > >> The node id is usually not needed for developer's convenience and > >> assuming there's a single node in "dev mode", yet clearly the > >> Infinispan template is meant to work with multiple nodes running so > >> this warning seems concerning. > >> > >> I'm not sure what the impact is on the transaction manager so I asked > >> on the Narayana forums; Tom pointed me to some thourough design > >> documents and also suggested the EAP image does set the node > >> identifier: > >> - https://developer.jboss.org/message/981702#981702 > >> > >> WDYT? we probably want the Infinispan template to set this as well, or > >> silence the warning? > >> > >> Thanks, > >> Sanne > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180411/84991ccf/attachment.html From rory.odonnell at oracle.com Thu Apr 12 05:57:57 2018 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Thu, 12 Apr 2018 10:57:57 +0100 Subject: [infinispan-dev] JDK 11 Early Access build 8 available Message-ID: Hi Galder, **JDK 11 EA build 8, *****under both the GPL and Oracle EA licenses, is now available at **http://jdk.java.net/11**. 
** * * Newly approved Schedule, status & features o http://openjdk.java.net/projects/jdk/11/ * Release Notes: o http://jdk.java.net/11/release-notes * Summary of changes o https://download.java.net/java/early_access/jdk11/8/jdk-11+8.html *Notable changes in JDK 11 EA builds since last email:* * Build 8: o If you have a library that uses the Selector API heavily then now would be a good time to test it out. [1] * Build 7 o The VM option "-XX:+AggressiveOpts" is deprecated in JDK 11 and will be removed in a future release. * Build 6: o JDK-8193033 : remove terminally deprecated sun.misc.Unsafe.defineClass. Users should use the public replacement `java.lang.invoke.MethodHandles.Lookup.defineClass` which was added in Java SE 9. [2] ** *SURVEY: The HotSpot Serviceability Agent (SA) *[3] * If you have used, or have (support) processes that utilize the Serviceability Agent or related APIs, then we would definitely appreciate if you would complete this survey: https://www.surveymonkey.com/r/CF3MYDL Regards, Rory [1] http://mail.openjdk.java.net/pipermail/nio-dev/2018-April/004964.html [2] https://docs.oracle.com/javase/9/docs/api/java/lang/invoke/MethodHandles.Lookup.html#defineClass-byte:A- [3] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-April/001052.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180412/a7d047bb/attachment.html From galder at redhat.com Thu Apr 12 10:49:57 2018 From: galder at redhat.com (Galder Zamarreno) Date: Thu, 12 Apr 2018 14:49:57 +0000 Subject: [infinispan-dev] Protobuf metadata cache and x-site Message-ID: Hi, We have an issue with protobuf metadata cache. If you run in a multi-site scenario, protobuf metadata information does not travel across sites by default. Being an internal cache, is it possible to somehow override/reconfigure it so that cross-site configuration can be added in standalone.xml? We're currently running a periodic job that checks if the metadata is present and if not present add it. So, we have a workaround for it, but it'd be not very user friendly for end users. Thoughts? Cheers, Galder -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180412/1343bdcf/attachment-0001.html From pedro at infinispan.org Thu Apr 12 11:41:49 2018 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 12 Apr 2018 16:41:49 +0100 Subject: [infinispan-dev] Protobuf metadata cache and x-site In-Reply-To: References: Message-ID: On 12-04-2018 15:49, Galder Zamarreno wrote: > Hi, > > We have an issue with protobuf metadata cache. > > If you run in a multi-site scenario, protobuf metadata information does > not travel across sites by default. > > Being an internal cache, is it possible to somehow override/reconfigure > it so that cross-site configuration can be added in standalone.xml? No :( since it is an internal cache, its configuration can't be changed. > > We're currently running a periodic job that checks if the metadata is > present and if not present add it. So, we have a workaround for it, but > it'd be not very user friendly for end users. > > Thoughts? Unfortunately none... it is the first time an internal cache needs to do some x-site. 
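(For anyone who needs the periodic job Galder describes above, re-registering a schema over Hot Rod amounts to a put into the protobuf metadata cache. A minimal sketch follows; the schema name and contents are placeholders, and the client is assumed to pick up its server list from hotrod-client.properties or default to localhost:11222.)

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    public class EnsureSchemaRegistered {
        public static void main(String[] args) {
            RemoteCacheManager rcm = new RemoteCacheManager();
            try {
                // "___protobuf_metadata" is the internal cache holding the .proto sources.
                RemoteCache<String, String> metadata = rcm.getCache("___protobuf_metadata");
                String name = "library.proto";                                   // placeholder schema name
                String source = "message Book { optional string title = 1; }";   // placeholder schema body
                // Only write the schema back if this site does not have it yet.
                metadata.putIfAbsent(name, source);
            } finally {
                rcm.stop();
            }
        }
    }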
> > Cheers, > Galder > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From galder at redhat.com Thu Apr 12 12:21:04 2018 From: galder at redhat.com (Galder Zamarreno) Date: Thu, 12 Apr 2018 16:21:04 +0000 Subject: [infinispan-dev] Protobuf metadata cache and x-site In-Reply-To: References: Message-ID: Ok, we do need to find a better way to deal with this. JIRA: https://issues.jboss.org/browse/ISPN-9074 On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo wrote: > > > On 12-04-2018 15:49, Galder Zamarreno wrote: > > Hi, > > > > We have an issue with protobuf metadata cache. > > > > If you run in a multi-site scenario, protobuf metadata information does > > not travel across sites by default. > > > > Being an internal cache, is it possible to somehow override/reconfigure > > it so that cross-site configuration can be added in standalone.xml? > > No :( since it is an internal cache, its configuration can't be changed. > > > > > We're currently running a periodic job that checks if the metadata is > > present and if not present add it. So, we have a workaround for it, but > > it'd be not very user friendly for end users. > > > > Thoughts? > > Unfortunately none... it is the first time an internal cache needs to do > some x-site. > > > > > Cheers, > > Galder > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180412/5fe984ef/attachment.html From pedro at infinispan.org Thu Apr 12 13:01:14 2018 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 12 Apr 2018 18:01:14 +0100 Subject: [infinispan-dev] Protobuf metadata cache and x-site In-Reply-To: References: Message-ID: <3d463800-e3d3-da5f-4bff-b606d37f0694@infinispan.org> Wouldn't be better to assume the protobuf cache doesn't fit the internal cache use case? :) On 12-04-2018 17:21, Galder Zamarreno wrote: > Ok, we do need to find a better way to deal with this. > > JIRA: https://issues.jboss.org/browse/ISPN-9074 > > On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo > wrote: > > > > On 12-04-2018 15:49, Galder Zamarreno wrote: > > Hi, > > > > We have an issue with protobuf metadata cache. > > > > If you run in a multi-site scenario, protobuf metadata > information does > > not travel across sites by default. > > > > Being an internal cache, is it possible to somehow > override/reconfigure > > it so that cross-site configuration can be added in standalone.xml? > > No :( since it is an internal cache, its configuration can't be changed. > > > > > We're currently running a periodic job that checks if the > metadata is > > present and if not present add it. So, we have a workaround for > it, but > > it'd be not very user friendly for end users. > > > > Thoughts? > > Unfortunately none... it is the first time an internal cache needs > to do > some x-site. 
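(For context, the setting that cannot currently be attached to the internal cache is a per-cache backup. With the embedded API, a cache configuration that also backs up to a remote site looks roughly like the sketch below; the site name "NYC" is a placeholder, and actually installing such a configuration under the protobuf metadata cache's name is exactly what is not possible today.)

    import org.infinispan.configuration.cache.BackupConfiguration.BackupStrategy;
    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    public class MetadataBackupSketch {
        // Replicated cache that additionally backs up writes to another site.
        public static Configuration replicatedWithBackup() {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.clustering().cacheMode(CacheMode.REPL_SYNC)
                   .sites().addBackup()
                       .site("NYC")                      // placeholder remote site name
                       .strategy(BackupStrategy.SYNC);
            return builder.build();
        }
    }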
> > > > > Cheers, > > Galder > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From ttarrant at redhat.com Thu Apr 12 15:27:20 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 12 Apr 2018 21:27:20 +0200 Subject: [infinispan-dev] Protobuf metadata cache and x-site In-Reply-To: <3d463800-e3d3-da5f-4bff-b606d37f0694@infinispan.org> References: <3d463800-e3d3-da5f-4bff-b606d37f0694@infinispan.org> Message-ID: It is definitely an internal cache. Because of this, automatically backing it up to a remote site might not be such a good idea. Backups are enabled per-cache, and therefore just blindly replicating the schema cache to the other site is not a good idea. I think that we need a cache-manager-level backup setting that does the right thing. Tristan On 4/12/18 7:01 PM, Pedro Ruivo wrote: > Wouldn't be better to assume the protobuf cache doesn't fit the internal > cache use case? :) > > On 12-04-2018 17:21, Galder Zamarreno wrote: >> Ok, we do need to find a better way to deal with this. >> >> JIRA: https://issues.jboss.org/browse/ISPN-9074 >> >> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo > > wrote: >> >> >> >> On 12-04-2018 15:49, Galder Zamarreno wrote: >> > Hi, >> > >> > We have an issue with protobuf metadata cache. >> > >> > If you run in a multi-site scenario, protobuf metadata >> information does >> > not travel across sites by default. >> > >> > Being an internal cache, is it possible to somehow >> override/reconfigure >> > it so that cross-site configuration can be added in standalone.xml? >> >> No :( since it is an internal cache, its configuration can't be changed. >> >> > >> > We're currently running a periodic job that checks if the >> metadata is >> > present and if not present add it. So, we have a workaround for >> it, but >> > it'd be not very user friendly for end users. >> > >> > Thoughts? >> >> Unfortunately none... it is the first time an internal cache needs >> to do >> some x-site. 
>> >> > >> > Cheers, >> > Galder >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From anistor at redhat.com Thu Apr 12 16:10:17 2018 From: anistor at redhat.com (Adrian Nistor) Date: Thu, 12 Apr 2018 23:10:17 +0300 Subject: [infinispan-dev] Protobuf metadata cache and x-site In-Reply-To: References: <3d463800-e3d3-da5f-4bff-b606d37f0694@infinispan.org> Message-ID: Backing up caches with protobuf payload to a remote site will not work if they are indexed, unless the remote site already has the schema for the types in question, or else indexing will fail. If the cache is not indexed it matters less. So the replication of protobuf metadata cache has to be arranged somehow before any other data is replicated. Manual replication is indeed PITA. I remember in the very early version of remote query the protobuf metadata cache configuration was created programatically on startup unless a manually defined configuration with that name was found, already provided by the user. In that case the user's config was used. This approach had the benefit of allowing the user to gain control if needed. But can also lead to gloom and doom. Was that too bad to do it again :)))? Adrian On 04/12/2018 10:27 PM, Tristan Tarrant wrote: > It is definitely an internal cache. Because of this, automatically > backing it up to a remote site might not be such a good idea. > > Backups are enabled per-cache, and therefore just blindly replicating > the schema cache to the other site is not a good idea. > > I think that we need a cache-manager-level backup setting that does the > right thing. > > Tristan > > On 4/12/18 7:01 PM, Pedro Ruivo wrote: >> Wouldn't be better to assume the protobuf cache doesn't fit the internal >> cache use case? :) >> >> On 12-04-2018 17:21, Galder Zamarreno wrote: >>> Ok, we do need to find a better way to deal with this. >>> >>> JIRA: https://issues.jboss.org/browse/ISPN-9074 >>> >>> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo >> > wrote: >>> >>> >>> >>> On 12-04-2018 15:49, Galder Zamarreno wrote: >>> > Hi, >>> > >>> > We have an issue with protobuf metadata cache. >>> > >>> > If you run in a multi-site scenario, protobuf metadata >>> information does >>> > not travel across sites by default. >>> > >>> > Being an internal cache, is it possible to somehow >>> override/reconfigure >>> > it so that cross-site configuration can be added in standalone.xml? >>> >>> No :( since it is an internal cache, its configuration can't be changed. >>> >>> > >>> > We're currently running a periodic job that checks if the >>> metadata is >>> > present and if not present add it. So, we have a workaround for >>> it, but >>> > it'd be not very user friendly for end users. >>> > >>> > Thoughts? >>> >>> Unfortunately none... 
it is the first time an internal cache needs >>> to do >>> some x-site. >>> >>> > >>> > Cheers, >>> > Galder >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> From ttarrant at redhat.com Thu Apr 12 16:13:55 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 12 Apr 2018 22:13:55 +0200 Subject: [infinispan-dev] Protobuf metadata cache and x-site In-Reply-To: References: <3d463800-e3d3-da5f-4bff-b606d37f0694@infinispan.org> Message-ID: <7ce691cc-eae4-f45e-e93c-a7aaa977a4f4@redhat.com> I think we can certainly make it additive, especially now that we have configuration templates in place: the user supplies a base template, and the internal cache logic override with what is needed so that broken configs are less probable (but still possible). Alternatively, instead of overriding, we just check that it matches the requirements. Tristan On 4/12/18 10:10 PM, Adrian Nistor wrote: > Backing up caches with protobuf payload to a remote site will not work > if they are indexed, unless the remote site already has the schema for > the types in question, or else indexing will fail. If the cache is not > indexed it matters less. > > So the replication of protobuf metadata cache has to be arranged somehow > before any other data is replicated. Manual replication is indeed PITA. > > I remember in the very early version of remote query the protobuf > metadata cache configuration was created programatically on startup > unless a manually defined configuration with that name was found, > already provided by the user. In that case the user's config was used. > This approach had the benefit of allowing the user to gain control if > needed. But can also lead to gloom and doom. Was that too bad to do it > again :)))? > > Adrian > > On 04/12/2018 10:27 PM, Tristan Tarrant wrote: >> It is definitely an internal cache. Because of this, automatically >> backing it up to a remote site might not be such a good idea. >> >> Backups are enabled per-cache, and therefore just blindly replicating >> the schema cache to the other site is not a good idea. >> >> I think that we need a cache-manager-level backup setting that does the >> right thing. >> >> Tristan >> >> On 4/12/18 7:01 PM, Pedro Ruivo wrote: >>> Wouldn't be better to assume the protobuf cache doesn't fit the internal >>> cache use case? :) >>> >>> On 12-04-2018 17:21, Galder Zamarreno wrote: >>>> Ok, we do need to find a better way to deal with this. >>>> >>>> JIRA: https://issues.jboss.org/browse/ISPN-9074 >>>> >>>> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo >>> > wrote: >>>> >>>> >>>> >>>> ????? On 12-04-2018 15:49, Galder Zamarreno wrote: >>>> ?????? > Hi, >>>> ?????? > >>>> ?????? > We have an issue with protobuf metadata cache. >>>> ?????? > >>>> ?????? > If you run in a multi-site scenario, protobuf metadata >>>> ????? 
information does >>>> ?????? > not travel across sites by default. >>>> ?????? > >>>> ?????? > Being an internal cache, is it possible to somehow >>>> ????? override/reconfigure >>>> ?????? > it so that cross-site configuration can be added in >>>> standalone.xml? >>>> >>>> ????? No :( since it is an internal cache, its configuration can't >>>> be changed. >>>> >>>> ?????? > >>>> ?????? > We're currently running a periodic job that checks if the >>>> ????? metadata is >>>> ?????? > present and if not present add it. So, we have a workaround >>>> for >>>> ????? it, but >>>> ?????? > it'd be not very user friendly for end users. >>>> ?????? > >>>> ?????? > Thoughts? >>>> >>>> ????? Unfortunately none... it is the first time an internal cache >>>> needs >>>> ????? to do >>>> ????? some x-site. >>>> >>>> ?????? > >>>> ?????? > Cheers, >>>> ?????? > Galder >>>> ?????? > >>>> ?????? > >>>> ?????? > _______________________________________________ >>>> ?????? > infinispan-dev mailing list >>>> ?????? > infinispan-dev at lists.jboss.org >>>> ????? >>>> ?????? > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> ?????? > >>>> ????? _______________________________________________ >>>> ????? infinispan-dev mailing list >>>> ????? infinispan-dev at lists.jboss.org >>>> >>>> ????? https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From ttarrant at redhat.com Thu Apr 12 16:15:41 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 12 Apr 2018 22:15:41 +0200 Subject: [infinispan-dev] Protobuf metadata cache and x-site In-Reply-To: <7ce691cc-eae4-f45e-e93c-a7aaa977a4f4@redhat.com> References: <3d463800-e3d3-da5f-4bff-b606d37f0694@infinispan.org> <7ce691cc-eae4-f45e-e93c-a7aaa977a4f4@redhat.com> Message-ID: <82a8b76f-ac0c-6b38-a2f8-af8661f9693e@redhat.com> We also need: backup priority for internal caches as well as conflict resolution for backups to avoid broken data replicating in the wrong direction. Tristan On 4/12/18 10:13 PM, Tristan Tarrant wrote: > I think we can certainly make it additive, especially now that we have > configuration templates in place: the user supplies a base template, and > the internal cache logic override with what is needed so that broken > configs are less probable (but still possible). Alternatively, instead > of overriding, we just check that it matches the requirements. > > Tristan > > On 4/12/18 10:10 PM, Adrian Nistor wrote: >> Backing up caches with protobuf payload to a remote site will not work >> if they are indexed, unless the remote site already has the schema for >> the types in question, or else indexing will fail. If the cache is not >> indexed it matters less. >> >> So the replication of protobuf metadata cache has to be arranged >> somehow before any other data is replicated. Manual replication is >> indeed PITA. >> >> I remember in the very early version of remote query the protobuf >> metadata cache configuration was created programatically on startup >> unless a manually defined configuration with that name was found, >> already provided by the user. 
In that case the user's config was used. >> This approach had the benefit of allowing the user to gain control if >> needed. But can also lead to gloom and doom. Was that too bad to do it >> again :)))? >> >> Adrian >> >> On 04/12/2018 10:27 PM, Tristan Tarrant wrote: >>> It is definitely an internal cache. Because of this, automatically >>> backing it up to a remote site might not be such a good idea. >>> >>> Backups are enabled per-cache, and therefore just blindly replicating >>> the schema cache to the other site is not a good idea. >>> >>> I think that we need a cache-manager-level backup setting that does the >>> right thing. >>> >>> Tristan >>> >>> On 4/12/18 7:01 PM, Pedro Ruivo wrote: >>>> Wouldn't be better to assume the protobuf cache doesn't fit the >>>> internal >>>> cache use case? :) >>>> >>>> On 12-04-2018 17:21, Galder Zamarreno wrote: >>>>> Ok, we do need to find a better way to deal with this. >>>>> >>>>> JIRA: https://issues.jboss.org/browse/ISPN-9074 >>>>> >>>>> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo >>>> > wrote: >>>>> >>>>> >>>>> >>>>> ????? On 12-04-2018 15:49, Galder Zamarreno wrote: >>>>> ?????? > Hi, >>>>> ?????? > >>>>> ?????? > We have an issue with protobuf metadata cache. >>>>> ?????? > >>>>> ?????? > If you run in a multi-site scenario, protobuf metadata >>>>> ????? information does >>>>> ?????? > not travel across sites by default. >>>>> ?????? > >>>>> ?????? > Being an internal cache, is it possible to somehow >>>>> ????? override/reconfigure >>>>> ?????? > it so that cross-site configuration can be added in >>>>> standalone.xml? >>>>> >>>>> ????? No :( since it is an internal cache, its configuration can't >>>>> be changed. >>>>> >>>>> ?????? > >>>>> ?????? > We're currently running a periodic job that checks if the >>>>> ????? metadata is >>>>> ?????? > present and if not present add it. So, we have a >>>>> workaround for >>>>> ????? it, but >>>>> ?????? > it'd be not very user friendly for end users. >>>>> ?????? > >>>>> ?????? > Thoughts? >>>>> >>>>> ????? Unfortunately none... it is the first time an internal cache >>>>> needs >>>>> ????? to do >>>>> ????? some x-site. >>>>> >>>>> ?????? > >>>>> ?????? > Cheers, >>>>> ?????? > Galder >>>>> ?????? > >>>>> ?????? > >>>>> ?????? > _______________________________________________ >>>>> ?????? > infinispan-dev mailing list >>>>> ?????? > infinispan-dev at lists.jboss.org >>>>> ????? >>>>> ?????? > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> ?????? > >>>>> ????? _______________________________________________ >>>>> ????? infinispan-dev mailing list >>>>> ????? infinispan-dev at lists.jboss.org >>>>> >>>>> ????? 
https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >> > -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From galder at redhat.com Fri Apr 13 09:21:13 2018 From: galder at redhat.com (Galder Zamarreno) Date: Fri, 13 Apr 2018 13:21:13 +0000 Subject: [infinispan-dev] Passing client listener parameters programmatically Message-ID: Hi, We're working with the OpenWhisk team to create a generic Feed that allows Infinispan remote events to be exposed in an OpenWhisk way. So, you'd pass in Hot Rod endpoint information, name of cache and other details and you'd establish a feed of data from that cache for create/updated/removed data. However, making this generic is tricky when you want to pass in filter/converter factory names since these are defined at the annotation level. Ideally we should have a way to pass in filter/converter factory names programmatically. To avoid limiting ourselves, you could potentially pass in an instance of the annotation in an overloaded method or as optional parameter [1]. Thoughts? Cheers, Galder [1] https://stackoverflow.com/questions/16299717/how-to-create-an-instance-of-an-annotation -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180413/52ff9112/attachment.html From rhusar at redhat.com Fri Apr 13 13:07:09 2018 From: rhusar at redhat.com (Radoslav Husar) Date: Fri, 13 Apr 2018 19:07:09 +0200 Subject: [infinispan-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Hi Sebastian, On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec wrote: > Hey Rado, Paul, > > I started looking into this issue and it turned out that WF subsystem > template doesn't provide `node-identifier` attribute [1]. I assume you mean that the default WildFly server profiles do not explicitly define the attribute. Right ? thus the value defaults in the model to "1" https://github.com/wildfly/wildfly/blob/master/transactions/src/main/java/org/jboss/as/txn/subsystem/TransactionSubsystemRootResourceDefinition.java#L97 which sole intention seems to be to log a warning on boot if the value is unchanged. Why they decided on a constant that will be inherently not unique as opposed to defaulting to the node name (which we already require to be unique) as clustering node name or undertow instance-id does, is unclear to me. Some context is on https://issues.jboss.org/browse/WFLY-1119. > I'm not sure if you guys are the right people to ask, but is it safe to > leave it set to default? Or shall I override our Infinispan templates and > add this parameter (as I mentioned before, in OpenShift this I wanted to set > it as Pod name trimmed to the last 23 chars since this is the limit). It is not safe to leave it set to "1" as that results in inconsistent processing of transaction recovery. 
IIUC we already set it to the node name for both EAP and JDG https://github.com/jboss-openshift/cct_module/blob/master/os-eap70-openshift/added/standalone-openshift.xml#L411 https://github.com/jboss-openshift/cct_module/blob/master/os-jdg7-conffiles/added/clustered-openshift.xml#L282 which in turn defaults to the pod name ? so which profiles are we talking about here? Rado > Thanks, > Seb > > [1] usually set to node-identifier="${jboss.node.name}" > > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero > wrote: >> >> On 9 April 2018 at 09:26, Sebastian Laskawiec wrote: >> > Thanks for looking into it Sanne. Of course, we should add it (it can be >> > set >> > to the same name as hostname since those are unique in Kubernetes). >> > >> > Created https://issues.jboss.org/browse/ISPN-9051 for it. >> > >> > Thanks again! >> > Seb >> >> Thanks Sebastian! >> >> > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero >> > wrote: >> >> >> >> Hi all, >> >> >> >> I've started to use the Infinispan Openshift Template and was browsing >> >> through the errors and warnings this produces. >> >> >> >> In particular I noticed "WFLYTX0013: Node identifier property is set >> >> to the default value. Please make sure it is unique." being produced >> >> by the transaction system. >> >> >> >> The node id is usually not needed for developer's convenience and >> >> assuming there's a single node in "dev mode", yet clearly the >> >> Infinispan template is meant to work with multiple nodes running so >> >> this warning seems concerning. >> >> >> >> I'm not sure what the impact is on the transaction manager so I asked >> >> on the Narayana forums; Tom pointed me to some thourough design >> >> documents and also suggested the EAP image does set the node >> >> identifier: >> >> - https://developer.jboss.org/message/981702#981702 >> >> >> >> WDYT? we probably want the Infinispan template to set this as well, or >> >> silence the warning? >> >> >> >> Thanks, >> >> Sanne >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Fri Apr 13 14:57:05 2018 From: mudokonman at gmail.com (William Burns) Date: Fri, 13 Apr 2018 18:57:05 +0000 Subject: [infinispan-dev] Passing client listener parameters programmatically In-Reply-To: References: Message-ID: I personally have never been a fan of the whole annotation thing to configure your listener, unfortunately it just has been this way. If you are just proposing to adding a new addClientListener method that takes those arguments, I don't have a problem with it. void addClientListener(Object listener, String filterFactoryName, Object[] filterFactoryParams, String converterFactoryName, Object[] converterFactoryParams); I would think we would use these values only and ignore any defined on the annotation. Also similar to this but I have some API ideas I would love to explore for ISPN 10 surrounding events and the consumption of them. 
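(To make the proposal concrete: today the factory names are fixed on the @ClientListener annotation, while the overload sketched above would let callers supply them at registration time. In the sketch below, the annotation-based class uses the existing client API; the programmatic call shown in the trailing comment is the proposed overload, not something in the current client, and the factory names are placeholders.)

    import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
    import org.infinispan.client.hotrod.annotation.ClientListener;
    import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;

    // Today: factory names are baked into the annotation at compile time.
    @ClientListener(filterFactoryName = "my-filter-factory",
                    converterFactoryName = "my-converter-factory")
    public class StaticallyBoundListener {
        @ClientCacheEntryCreated
        public void onCreated(ClientCacheEntryCreatedEvent<String> event) {
            System.out.println("Created key: " + event.getKey());
        }
    }

    // Proposed: the same listener logic, with the names chosen by the caller
    // (hypothetical overload from the discussion above, not in the current API):
    //
    //   remoteCache.addClientListener(listener,
    //           "my-filter-factory", new Object[0],
    //           "my-converter-factory", new Object[0]);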
- Will On Fri, Apr 13, 2018 at 11:12 AM Galder Zamarreno wrote: > Hi, > > We're working with the OpenWhisk team to create a generic Feed that allows > Infinispan remote events to be exposed in an OpenWhisk way. > > So, you'd pass in Hot Rod endpoint information, name of cache and other > details and you'd establish a feed of data from that cache for > create/updated/removed data. > > However, making this generic is tricky when you want to pass in > filter/converter factory names since these are defined at the annotation > level. > > Ideally we should have a way to pass in filter/converter factory names > programmatically. To avoid limiting ourselves, you could potentially pass > in an instance of the annotation in an overloaded method or as optional > parameter [1]. > > Thoughts? > > Cheers, > Galder > > [1] > https://stackoverflow.com/questions/16299717/how-to-create-an-instance-of-an-annotation > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180413/ce0d9f24/attachment.html From dan.berindei at gmail.com Mon Apr 16 03:48:14 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 16 Apr 2018 10:48:14 +0300 Subject: [infinispan-dev] Passing client listener parameters programmatically In-Reply-To: References: Message-ID: +1 to not require annotations, but -100 to ignore the annotations if present, we should throw an exception instead. Dan On Fri, Apr 13, 2018 at 9:57 PM, William Burns wrote: > I personally have never been a fan of the whole annotation thing to > configure your listener, unfortunately it just has been this way. > > If you are just proposing to adding a new addClientListener method that > takes those arguments, I don't have a problem with it. > > void addClientListener(Object listener, String filterFactoryName, Object[] > filterFactoryParams, String converterFactoryName, Object[] > converterFactoryParams); > > I would think we would use these values only and ignore any defined on the > annotation. > > > Also similar to this but I have some API ideas I would love to explore for > ISPN 10 surrounding events and the consumption of them. > > - Will > > On Fri, Apr 13, 2018 at 11:12 AM Galder Zamarreno > wrote: > >> Hi, >> >> We're working with the OpenWhisk team to create a generic Feed that >> allows Infinispan remote events to be exposed in an OpenWhisk way. >> >> So, you'd pass in Hot Rod endpoint information, name of cache and other >> details and you'd establish a feed of data from that cache for >> create/updated/removed data. >> >> However, making this generic is tricky when you want to pass in >> filter/converter factory names since these are defined at the annotation >> level. >> >> Ideally we should have a way to pass in filter/converter factory names >> programmatically. To avoid limiting ourselves, you could potentially pass >> in an instance of the annotation in an overloaded method or as optional >> parameter [1]. >> >> Thoughts? 
>> >> Cheers, >> Galder >> >> [1] https://stackoverflow.com/questions/16299717/how-to- >> create-an-instance-of-an-annotation >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180416/0603e2fc/attachment-0001.html From anistor at redhat.com Mon Apr 16 04:19:55 2018 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 16 Apr 2018 11:19:55 +0300 Subject: [infinispan-dev] Passing client listener parameters programmatically In-Reply-To: References: Message-ID: <6fb0779b-acd6-821c-9a85-af67f0296a02@redhat.com> +1 for both points. And I absolutely have to add that I never liked the annotation based listeners, both the embedded and the remote ones. On 04/16/2018 10:48 AM, Dan Berindei wrote: > +1 to not require annotations, but -100 to ignore the annotations if > present, we should throw an exception instead. > > Dan > > On Fri, Apr 13, 2018 at 9:57 PM, William Burns > wrote: > > I personally have never been a fan of the whole annotation thing > to configure your listener, unfortunately it just has been this way. > > If you are just proposing to adding a new addClientListener method > that takes those arguments, I don't have a problem with it. > > void addClientListener(Object listener, String filterFactoryName, > Object[] filterFactoryParams, String converterFactoryName, > Object[] converterFactoryParams); > > I would think we would use these values only and ignore any > defined on the annotation. > > > Also similar to this but I have some API ideas I would love to > explore for ISPN 10 surrounding events and the consumption of them. > > ?- Will > > On Fri, Apr 13, 2018 at 11:12 AM Galder Zamarreno > > wrote: > > Hi, > > We're working with the OpenWhisk team to create a generic Feed > that allows Infinispan remote events to be exposed in an > OpenWhisk way. > > So, you'd pass in Hot Rod endpoint information, name of cache > and other details and you'd establish a feed of data from that > cache for create/updated/removed data. > > However, making this generic is tricky when you want to pass > in filter/converter factory names since these are defined at > the annotation level. > > Ideally we should have a way to pass in filter/converter > factory names programmatically. To avoid limiting ourselves, > you could potentially pass in an instance of the annotation in > an overloaded method or as optional parameter [1]. > > Thoughts? > > Cheers, > Galder > > [1] > https://stackoverflow.com/questions/16299717/how-to-create-an-instance-of-an-annotation > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180416/47ab9cf2/attachment.html From slaskawi at redhat.com Mon Apr 16 04:31:46 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 16 Apr 2018 08:31:46 +0000 Subject: [infinispan-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Adding +WildFly Dev to the loop Thanks for the explanation Rado. TL;DR: A while ago Sanne pointed out that we do not set `node-identifier` in transaction subsystem by default. The default value for the `node-identifier` attribute it `1`. Not setting this attribute might cause problems in transaction recovery. Perhaps we could follow Rado's idea and set it to node name by default? Some more comments inlined. Thanks, Sebastian On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar wrote: > Hi Sebastian, > > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec > wrote: > > Hey Rado, Paul, > > > > I started looking into this issue and it turned out that WF subsystem > > template doesn't provide `node-identifier` attribute [1]. > > I assume you mean that the default WildFly server profiles do not > explicitly define the attribute. Right ? thus the value defaults in > the model to "1" > > https://github.com/wildfly/wildfly/blob/master/transactions/src/main/java/org/jboss/as/txn/subsystem/TransactionSubsystemRootResourceDefinition.java#L97 > which sole intention seems to be to log a warning on boot if the value > is unchanged. > Why they decided on a constant that will be inherently not unique as > opposed to defaulting to the node name (which we already require to be > unique) as clustering node name or undertow instance-id does, is > unclear to me. > Some context is on https://issues.jboss.org/browse/WFLY-1119. > In OpenShift environment we could set it to `hostname`. This is guaranteed to be unique in whole OpenShift cluster. > > > I'm not sure if you guys are the right people to ask, but is it safe to > > leave it set to default? Or shall I override our Infinispan templates and > > add this parameter (as I mentioned before, in OpenShift this I wanted to > set > > it as Pod name trimmed to the last 23 chars since this is the limit). > > It is not safe to leave it set to "1" as that results in inconsistent > processing of transaction recovery. > IIUC we already set it to the node name for both EAP and JDG > > https://github.com/jboss-openshift/cct_module/blob/master/os-eap70-openshift/added/standalone-openshift.xml#L411 > > https://github.com/jboss-openshift/cct_module/blob/master/os-jdg7-conffiles/added/clustered-openshift.xml#L282 > which in turn defaults to the pod name ? so which profiles are we > talking about here? > Granted, we set it by default in CCT Modules. However in Infinispan we just grab provided transaction subsystem when rendering full configuration from featurepacks: https://github.com/infinispan/infinispan/blob/master/server/integration/feature-pack/src/main/resources/configuration/standalone/subsystems-cloud.xml#L19 The default configuration XML doesn't contain the `node-identifier` attribute. I can add it manually in the cloud.xml but I believe the right approach is to modify the transaction subsystem. > Rado > > > Thanks, > > Seb > > > > [1] usually set to node-identifier="${jboss.node.name}" > > > > > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero > > wrote: > >> > >> On 9 April 2018 at 09:26, Sebastian Laskawiec > wrote: > >> > Thanks for looking into it Sanne. 
Of course, we should add it (it can > be > >> > set > >> > to the same name as hostname since those are unique in Kubernetes). > >> > > >> > Created https://issues.jboss.org/browse/ISPN-9051 for it. > >> > > >> > Thanks again! > >> > Seb > >> > >> Thanks Sebastian! > >> > >> > > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero > >> > wrote: > >> >> > >> >> Hi all, > >> >> > >> >> I've started to use the Infinispan Openshift Template and was > browsing > >> >> through the errors and warnings this produces. > >> >> > >> >> In particular I noticed "WFLYTX0013: Node identifier property is set > >> >> to the default value. Please make sure it is unique." being produced > >> >> by the transaction system. > >> >> > >> >> The node id is usually not needed for developer's convenience and > >> >> assuming there's a single node in "dev mode", yet clearly the > >> >> Infinispan template is meant to work with multiple nodes running so > >> >> this warning seems concerning. > >> >> > >> >> I'm not sure what the impact is on the transaction manager so I asked > >> >> on the Narayana forums; Tom pointed me to some thourough design > >> >> documents and also suggested the EAP image does set the node > >> >> identifier: > >> >> - https://developer.jboss.org/message/981702#981702 > >> >> > >> >> WDYT? we probably want the Infinispan template to set this as well, > or > >> >> silence the warning? > >> >> > >> >> Thanks, > >> >> Sanne > >> >> _______________________________________________ > >> >> infinispan-dev mailing list > >> >> infinispan-dev at lists.jboss.org > >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > >> > > >> > _______________________________________________ > >> > infinispan-dev mailing list > >> > infinispan-dev at lists.jboss.org > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180416/65962cf1/attachment-0001.html From ttarrant at redhat.com Mon Apr 16 10:48:28 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 16 Apr 2018 16:48:28 +0200 Subject: [infinispan-dev] Weekly IRC Meeting logs 2018-04-16 Message-ID: <9b2ce70b-8749-fe08-35af-a742d7644764@redhat.com> Get them here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-04-16-14.00.log.html Tristan From tsegismont at gmail.com Wed Apr 18 05:45:55 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Wed, 18 Apr 2018 11:45:55 +0200 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: <6c764138-fe92-3a05-b351-d8db086c866c@infinispan.org> References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> <6c764138-fe92-3a05-b351-d8db086c866c@infinispan.org> Message-ID: Hi folks, Sorry I've been busy on other things and couldn't get back to you earlier. I tried running vertx-infinispan test suite with 9.2.1.Final today. 
There are some problems still but I can't say which ones yet because I hit: https://jira.qos.ch/browse/LOGBACK-1027 We use logback for test logs and all I get is: 2018-04-18 11:37:46,678 [stateTransferExecutor-thread--p4453-t24] ERROR o.i.executors.LimitedExecutor - Exception in task java.lang.StackOverflowError: null at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:54) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) ... so on so forth I will run the suite again without logback and tell you what the actual problem is. Regards, Thomas 2018-03-27 11:15 GMT+02:00 Pedro Ruivo : > JIRA: https://issues.jboss.org/browse/ISPN-8994 > > On 27-03-2018 10:08, Pedro Ruivo wrote: > > > > > > On 27-03-2018 09:03, Sebastian Laskawiec wrote: > >> At the moment, the cluster health status checker enumerates all caches > >> in the cache manager [1] and checks whether those cashes are running > >> and not in degraded more [2]. > >> > >> I'm not sure how counter caches have been implemented. One thing is > >> for sure - they should be taken into account in this loop [3]. > > > > The private caches aren't listed by CacheManager.getCacheNames(). We > > have to check them via InternalCacheRegistry.getInternalCacheNames(). > > > > I'll open a JIRA if you don't mind :) > > > >> > >> [1] > >> https://github.com/infinispan/infinispan/blob/master/core/ > src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22 > >> > >> [2] > >> https://github.com/infinispan/infinispan/blob/master/core/ > src/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25 > >> > >> [3] > >> https://github.com/infinispan/infinispan/blob/master/core/ > src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180418/5172683b/attachment.html From slaskawi at redhat.com Wed Apr 18 09:07:54 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 18 Apr 2018 13:07:54 +0000 Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Hey Tom, Comments inlined. Thanks, Sebastian On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson wrote: > > > On 16 April 2018 at 09:31, <> wrote: > >> Adding +WildFly Dev to the loop > > >> >> Thanks for the explanation Rado. >> >> TL;DR: A while ago Sanne pointed out that we do not set `node-identifier` >> in transaction subsystem by default. The default value for the >> `node-identifier` attribute it `1`. Not setting this attribute might cause >> problems in transaction recovery. 
Perhaps we could follow Rado's idea and >> set it to node name by default? >> > Indeed - it would cause serious data integrity problems if a non-unique > node-identifier is used. > >> Some more comments inlined. >> >> Thanks, >> Sebastian >> >> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar >> wrote: >> >> > Hi Sebastian, >> > >> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec >> > wrote: >> > > Hey Rado, Paul, >> > > >> > > I started looking into this issue and it turned out that WF subsystem >> > > template doesn't provide `node-identifier` attribute [1]. >> > >> > I assume you mean that the default WildFly server profiles do not >> > > explicitly define the attribute. Right ? thus the value defaults in > > >> > the model to "1" >> > >> > >> https://github.com/wildfly/wildfly/blob/master/transactions/src/main/java/org/jboss/as/txn/subsystem/TransactionSubsystemRootResourceDefinition.java#L97 >> > which sole intention seems to be to log a warning on boot if the value >> > is unchanged. >> > Why they decided on a constant that will be inherently not unique as >> > opposed to defaulting to the node name (which we already require to be >> > unique) as clustering node name or undertow instance-id does, is >> > unclear to me. >> > Some context is on https://issues.jboss.org/browse/WFLY-1119. >> > >> >> In OpenShift environment we could set it to `hostname`. This is guaranteed >> to be unique in whole OpenShift cluster. >> >> >> We do this too in EAP images. > To Rado's point, the default is "1" so we can print the warning to alert > people they are misconfigured - it seems to be working :) > This is the point. From my understanding, if we set it to node name (instead of "1"), we could make it always work correctly. We could even remove the code that emits the warning (since the node name needs to be unique). To sum it up - if we decided to proceed this way, there would be no requirement of setting the node-identifier at all. > > >> > > > >> > > I'm not sure if you guys are the right people to ask, but is it safe >> to >> > > leave it set to default? Or shall I override our Infinispan templates >> and >> > > add this parameter (as I mentioned before, in OpenShift this I wanted >> to >> > set >> > > it as Pod name trimmed to the last 23 chars since this is the limit). >> > Putting a response to this in line - I am not certain who originally > proposed this. > > You must use a globally unique node-identifier. If you are certain the > last 23 characters guarantee that it would be valid - if there is a chance > they are not unique it is not valid to trim. > If that's not an issue, again, we could use the same limit as we have for node name. > > > >> > > > >> > It is not safe to leave it set to "1" as that results in inconsistent >> > processing of transaction recovery. >> > IIUC we already set it to the node name for both EAP and JDG >> > >> > >> https://github.com/jboss-openshift/cct_module/blob/master/os-eap70-openshift/added/standalone-openshift.xml#L411 >> > >> > >> https://github.com/jboss-openshift/cct_module/blob/master/os-jdg7-conffiles/added/clustered-openshift.xml#L282 >> > > which in turn defaults to the pod name ? so which profiles are we > > >> > talking about here? >> > >> >> Granted, we set it by default in CCT Modules. 
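Since the thread keeps coming back to the 23-character limit: below is a minimal Java sketch of the trimming that was proposed for deriving a node-identifier from the pod name. The class and method names are hypothetical (not part of WildFly, Infinispan or the CCT modules), and, per Tom's caveat above, the trimmed value is only usable if it remains globally unique.

// Hypothetical helper: derive a transaction node-identifier from the pod/host
// name by keeping only its last 23 characters, the limit mentioned in this
// thread. Caveat from the thread: the result must still be globally unique,
// otherwise trimming is not a valid option.
public final class NodeIdentifiers {

    private static final int MAX_LENGTH = 23;

    public static String fromPodName(String podName) {
        if (podName.length() <= MAX_LENGTH) {
            return podName;
        }
        // Keep the last characters; in Kubernetes the generated suffix that makes
        // the pod name unique sits at the end, which is why the tail is kept.
        return podName.substring(podName.length() - MAX_LENGTH);
    }
}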
However in Infinispan we >> just >> grab provided transaction subsystem when rendering full configuration from >> featurepacks: >> >> https://github.com/infinispan/infinispan/blob/master/server/integration/feature-pack/src/main/resources/configuration/standalone/subsystems-cloud.xml#L19 >> >> The default configuration XML doesn't contain the `node-identifier` >> attribute. I can add it manually in the cloud.xml but I believe the right >> approach is to modify the transaction subsystem. >> >> >> > Rado >> > >> > > Thanks, >> > > Seb >> > > >> > > [1] usually set to node-identifier="${jboss.node.name}" >> > > >> > > >> > > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero > infinispan.org> >> > > wrote: >> > >> >> > >> On 9 April 2018 at 09:26, Sebastian Laskawiec > redhat.com> > > >> > wrote: >> > >> > Thanks for looking into it Sanne. Of course, we should add it (it >> can >> > be >> > >> > set >> > >> > to the same name as hostname since those are unique in Kubernetes). >> > >> > >> > >> > Created https://issues.jboss.org/browse/ISPN-9051 for it. >> > >> > >> > >> > Thanks again! >> > >> > Seb >> > >> >> > >> Thanks Sebastian! >> > >> >> > >> > >> > > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero > infinispan.org> > > >> > >> > wrote: >> > >> >> >> > >> >> Hi all, >> > >> >> >> > >> >> I've started to use the Infinispan Openshift Template and was >> > browsing >> > >> >> through the errors and warnings this produces. >> > >> >> >> > >> >> In particular I noticed "WFLYTX0013: Node identifier property is >> set >> > >> >> to the default value. Please make sure it is unique." being >> produced >> > >> >> by the transaction system. >> > >> >> >> > >> >> The node id is usually not needed for developer's convenience and >> > >> >> assuming there's a single node in "dev mode", yet clearly the >> > >> >> Infinispan template is meant to work with multiple nodes running >> so >> > >> >> this warning seems concerning. >> > >> >> >> > >> >> I'm not sure what the impact is on the transaction manager so I >> asked >> > >> >> on the Narayana forums; Tom pointed me to some thourough design >> > >> >> documents and also suggested the EAP image does set the node >> > >> >> identifier: >> > >> >> - https://developer.jboss.org/message/981702#981702 >> > >> >> >> > >> >> WDYT? we probably want the Infinispan template to set this as >> well, >> > or >> > >> >> silence the warning? >> > >> >> >> > >> >> Thanks, >> > >> >> Sanne >> > >> >> _______________________________________________ >> > >> >> infinispan-dev mailing list >> > > >> >> infinispan-dev at lists.jboss.org > > >> > >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > >> > >> > _______________________________________________ >> > >> > infinispan-dev mailing list >> > > >> > infinispan-dev at lists.jboss.org > > >> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> _______________________________________________ >> > >> infinispan-dev mailing list >> > >> infinispan-dev at lists.jboss.org >> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: >> http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180416/65962cf1/attachment-0001.html >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180418/91e45c6a/attachment-0001.html From tsegismont at gmail.com Wed Apr 18 11:00:06 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Wed, 18 Apr 2018 17:00:06 +0200 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> <6c764138-fe92-3a05-b351-d8db086c866c@infinispan.org> Message-ID: So here's the Circular Referenced Suppressed Exception [stateTransferExecutor-thread--p221-t33] 2018-04-18T16:15:06.662+02:00 WARN [org.infinispan.statetransfer.InboundTransferTask] ISPN000210: Failed to request state of cache __vertx.subs from node sombrero-25286, segments {0} org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node sombrero-25286 was suspected at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33) at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31) at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17) at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23) at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:51) at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.addRequest(JGroupsTransport.java:921) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeCommand(JGroupsTransport.java:815) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeCommand(JGroupsTransport.java:123) at org.infinispan.remoting.rpc.RpcManagerImpl.invokeCommand(RpcManagerImpl.java:138) at org.infinispan.statetransfer.InboundTransferTask.startTransfer(InboundTransferTask.java:134) at org.infinispan.statetransfer.InboundTransferTask.requestSegments(InboundTransferTask.java:113) at org.infinispan.conflict.impl.StateReceiverImpl$SegmentRequest.lambda$requestState$2(StateReceiverImpl.java:164) at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1(LimitedExecutor.java:101) at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144) at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33) at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Suppressed: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node sombrero-25286 was suspected at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915) at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:82) at org.infinispan.remoting.rpc.RpcManagerImpl.blocking(RpcManagerImpl.java:260) ... 10 more [CIRCULAR REFERENCE:org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node sombrero-25286 was suspected] It does not happen with 9.2.0.Final and prevents from using ISPN embedded with logback. Do you want me to file an issue ? 
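For readers who do not want to chase LOGBACK-1027: the important detail in the trace above is the [CIRCULAR REFERENCE] marker. The suppressed ExecutionException points back, through its cause, to the very SuspectException that suppresses it, so any logger that copies the cause/suppressed graph recursively without cycle detection never terminates. A minimal, Infinispan-independent Java sketch of the same shape, using only JDK exceptions (names chosen for illustration):

import java.util.concurrent.ExecutionException;

public class CircularSuppressedDemo {

    public static void main(String[] args) {
        // Same shape as the trace above: root --suppressed--> wrapper --cause--> root
        RuntimeException root = new RuntimeException("node was suspected");
        ExecutionException wrapper = new ExecutionException(root);
        root.addSuppressed(wrapper);

        // The JDK printer tracks already-visited throwables, so it prints a
        // "[CIRCULAR REFERENCE: ...]" marker instead of recursing forever.
        root.printStackTrace();

        // A framework that eagerly rebuilds the whole cause/suppressed chain
        // without such tracking (the LOGBACK-1027 symptom) overflows the stack
        // when asked to render this throwable.
    }
}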
2018-04-18 11:45 GMT+02:00 Thomas SEGISMONT : > Hi folks, > > Sorry I've been busy on other things and couldn't get back to you earlier. > > I tried running vertx-infinispan test suite with 9.2.1.Final today. There > are some problems still but I can't say which ones yet because I hit: > https://jira.qos.ch/browse/LOGBACK-1027 > > We use logback for test logs and all I get is: > > 2018-04-18 11:37:46,678 [stateTransferExecutor-thread--p4453-t24] ERROR > o.i.executors.LimitedExecutor - Exception in task > java.lang.StackOverflowError: null > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:54) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:60) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:72) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:60) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:72) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:60) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:72) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:60) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:72) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:60) > at ch.qos.logback.classic.spi.ThrowableProxy.( > ThrowableProxy.java:72) > ... so on so forth > > I will run the suite again without logback and tell you what the actual > problem is. > > Regards, > Thomas > > 2018-03-27 11:15 GMT+02:00 Pedro Ruivo : > >> JIRA: https://issues.jboss.org/browse/ISPN-8994 >> >> On 27-03-2018 10:08, Pedro Ruivo wrote: >> > >> > >> > On 27-03-2018 09:03, Sebastian Laskawiec wrote: >> >> At the moment, the cluster health status checker enumerates all caches >> >> in the cache manager [1] and checks whether those cashes are running >> >> and not in degraded more [2]. >> >> >> >> I'm not sure how counter caches have been implemented. One thing is >> >> for sure - they should be taken into account in this loop [3]. >> > >> > The private caches aren't listed by CacheManager.getCacheNames(). We >> > have to check them via InternalCacheRegistry.getInternalCacheNames(). >> > >> > I'll open a JIRA if you don't mind :) >> > >> >> >> >> [1] >> >> https://github.com/infinispan/infinispan/blob/master/core/sr >> c/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22 >> >> >> >> [2] >> >> https://github.com/infinispan/infinispan/blob/master/core/sr >> c/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25 >> >> >> >> [3] >> >> https://github.com/infinispan/infinispan/blob/master/core/sr >> c/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180418/aa143acc/attachment.html From tsegismont at gmail.com Fri Apr 20 09:38:21 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Fri, 20 Apr 2018 15:38:21 +0200 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> <6c764138-fe92-3a05-b351-d8db086c866c@infinispan.org> Message-ID: I tried our test suite on a slower machine (iMac from 2011). 
It passes consistently there. On my laptop, I keep seeing this from time to time (in different tests): 2018-04-19T19:53:09.513 WARN [Context=org.infinispan.LOCKS]ISPN000320: After merge (or coordinator change), cache still hasn't recovered a majority of members and must stay in degraded mode. Current members are [sombrero-19385], lost members are [sombrero-42917], stable members are [sombrero-42917, sombrero-19385] It happens when we shutdown nodes one after the other (even when waiting for cluster status to be "healthy" plus extra 2 seconds). After that the nodes remains blocked in DefaultCacheManager.stop 2018-04-19T19:49:29.242 AVERTISSEMENT Thread Thread[vert.x-worker-thread-5,5,main] has been blocked for 60774 ms, time limit is 60000 io.vertx.core.VertxException: Thread blocked at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693) at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729) at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934) at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:688) at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:734) at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:711) at io.vertx.ext.cluster.infinispan.InfinispanClusterManager.lambda$leave$5(InfinispanClusterManager.java:285) at io.vertx.ext.cluster.infinispan.InfinispanClusterManager$$Lambda$421/578931659.handle(Unknown Source) at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:265) at io.vertx.core.impl.ContextImpl$$Lambda$27/1330754528.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) 2018-04-18 17:00 GMT+02:00 Thomas SEGISMONT : > So here's the Circular Referenced Suppressed Exception > > [stateTransferExecutor-thread--p221-t33] 2018-04-18T16:15:06.662+02:00 > WARN [org.infinispan.statetransfer.InboundTransferTask] ISPN000210: > Failed to request state of cache __vertx.subs from node sombrero-25286, > segments {0} > org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: > Node sombrero-25286 was suspected > at org.infinispan.remoting.transport.ResponseCollectors. > remoteNodeSuspected(ResponseCollectors.java:33) > at org.infinispan.remoting.transport.impl.SingleResponseCollector. > targetNotFound(SingleResponseCollector.java:31) > at org.infinispan.remoting.transport.impl.SingleResponseCollector. > targetNotFound(SingleResponseCollector.java:17) > at org.infinispan.remoting.transport.ValidSingleResponseCollector. > addResponse(ValidSingleResponseCollector.java:23) > at org.infinispan.remoting.transport.impl.SingleTargetRequest. > receiveResponse(SingleTargetRequest.java:51) > at org.infinispan.remoting.transport.impl. > SingleTargetRequest.onNewView(SingleTargetRequest.java:42) > at org.infinispan.remoting.transport.jgroups. > JGroupsTransport.addRequest(JGroupsTransport.java:921) > at org.infinispan.remoting.transport.jgroups.JGroupsTransport. > invokeCommand(JGroupsTransport.java:815) > at org.infinispan.remoting.transport.jgroups.JGroupsTransport. 
> invokeCommand(JGroupsTransport.java:123) > at org.infinispan.remoting.rpc.RpcManagerImpl.invokeCommand( > RpcManagerImpl.java:138) > at org.infinispan.statetransfer.InboundTransferTask.startTransfer( > InboundTransferTask.java:134) > at org.infinispan.statetransfer.InboundTransferTask.requestSegments( > InboundTransferTask.java:113) > at org.infinispan.conflict.impl.StateReceiverImpl$ > SegmentRequest.lambda$requestState$2(StateReceiverImpl.java:164) > at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1( > LimitedExecutor.java:101) > at org.infinispan.executors.LimitedExecutor.runTasks( > LimitedExecutor.java:144) > at org.infinispan.executors.LimitedExecutor.access$100( > LimitedExecutor.java:33) > at org.infinispan.executors.LimitedExecutor$Runner.run( > LimitedExecutor.java:174) > at java.util.concurrent.ThreadPoolExecutor.runWorker( > ThreadPoolExecutor.java:1149) > at java.util.concurrent.ThreadPoolExecutor$Worker.run( > ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Suppressed: java.util.concurrent.ExecutionException: > org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: > Node sombrero-25286 was suspected > at java.util.concurrent.CompletableFuture.reportGet( > CompletableFuture.java:357) > at java.util.concurrent.CompletableFuture.get( > CompletableFuture.java:1915) > at org.infinispan.util.concurrent.CompletableFutures. > await(CompletableFutures.java:82) > at org.infinispan.remoting.rpc.RpcManagerImpl.blocking( > RpcManagerImpl.java:260) > ... 10 more > [CIRCULAR REFERENCE:org.infinispan.remoting.transport.jgroups.SuspectException: > ISPN000400: Node sombrero-25286 was suspected] > > It does not happen with 9.2.0.Final and prevents from using ISPN embedded > with logback. Do you want me to file an issue ? > > 2018-04-18 11:45 GMT+02:00 Thomas SEGISMONT : > >> Hi folks, >> >> Sorry I've been busy on other things and couldn't get back to you earlier. >> >> I tried running vertx-infinispan test suite with 9.2.1.Final today. There >> are some problems still but I can't say which ones yet because I hit: >> https://jira.qos.ch/browse/LOGBACK-1027 >> >> We use logback for test logs and all I get is: >> >> 2018-04-18 11:37:46,678 [stateTransferExecutor-thread--p4453-t24] ERROR >> o.i.executors.LimitedExecutor - Exception in task >> java.lang.StackOverflowError: null >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:54) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:60) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:72) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:60) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:72) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:60) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:72) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:60) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:72) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:60) >> at ch.qos.logback.classic.spi.ThrowableProxy.(ThrowablePr >> oxy.java:72) >> ... so on so forth >> >> I will run the suite again without logback and tell you what the actual >> problem is. 
>> >> Regards, >> Thomas >> >> 2018-03-27 11:15 GMT+02:00 Pedro Ruivo : >> >>> JIRA: https://issues.jboss.org/browse/ISPN-8994 >>> >>> On 27-03-2018 10:08, Pedro Ruivo wrote: >>> > >>> > >>> > On 27-03-2018 09:03, Sebastian Laskawiec wrote: >>> >> At the moment, the cluster health status checker enumerates all >>> caches >>> >> in the cache manager [1] and checks whether those cashes are running >>> >> and not in degraded more [2]. >>> >> >>> >> I'm not sure how counter caches have been implemented. One thing is >>> >> for sure - they should be taken into account in this loop [3]. >>> > >>> > The private caches aren't listed by CacheManager.getCacheNames(). We >>> > have to check them via InternalCacheRegistry.getInternalCacheNames(). >>> > >>> > I'll open a JIRA if you don't mind :) >>> > >>> >> >>> >> [1] >>> >> https://github.com/infinispan/infinispan/blob/master/core/sr >>> c/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22 >>> >> >>> >> [2] >>> >> https://github.com/infinispan/infinispan/blob/master/core/sr >>> c/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25 >>> >> >>> >> [3] >>> >> https://github.com/infinispan/infinispan/blob/master/core/sr >>> c/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24 >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180420/902fc731/attachment-0001.html From mudokonman at gmail.com Fri Apr 20 14:10:02 2018 From: mudokonman at gmail.com (William Burns) Date: Fri, 20 Apr 2018 18:10:02 +0000 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> <6c764138-fe92-3a05-b351-d8db086c866c@infinispan.org> Message-ID: On Fri, Apr 20, 2018 at 9:43 AM Thomas SEGISMONT wrote: > I tried our test suite on a slower machine (iMac from 2011). It passes > consistently there. > > On my laptop, I keep seeing this from time to time (in different tests): > > 2018-04-19T19:53:09.513 WARN [Context=org.infinispan.LOCKS]ISPN000320: > After merge (or coordinator change), cache still hasn't recovered a > majority of members and must stay in degraded mode. Current members are > [sombrero-19385], lost members are [sombrero-42917], stable members are > [sombrero-42917, sombrero-19385] > I would expect the nodes to be leaving gracefully, which shouldn't cause a merge. I am not sure how your test is producing that. Can you produce a TRACE log and a JIRA for it? However if there is a merge, if you go down a single node it will always be in DEGRADED mode, when using partition handling. This is due to not having a simple majority as described in http://infinispan.org/docs/stable/user_guide/user_guide.html#partition_handling > > It happens when we shutdown nodes one after the other (even when waiting > for cluster status to be "healthy" plus extra 2 seconds). 
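To make the majority rule concrete, here is a hedged sketch (not code taken from vertx-infinispan) of a distributed cache configured with partition handling, assuming the 9.x programmatic API. With two stable members, a lone survivor is 1 of 2, which is not a strict majority, hence the ISPN000320 "must stay in degraded mode" warning quoted above.

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.partitionhandling.PartitionHandling;

public class PartitionHandlingSketch {

    static Configuration distSyncWithPartitionHandling() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering()
               .cacheMode(CacheMode.DIST_SYNC)
               // Deny reads and writes while this partition cannot prove it holds
               // a majority of the last stable topology; with 2 stable members a
               // single remaining node does not qualify and stays DEGRADED.
               .partitionHandling()
               .whenSplit(PartitionHandling.DENY_READ_WRITES);
        return builder.build();
    }
}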
> > After that the nodes remains blocked in DefaultCacheManager.stop > > 2018-04-19T19:49:29.242 AVERTISSEMENT Thread > Thread[vert.x-worker-thread-5,5,main] has been blocked for 60774 ms, time > limit is 60000 > io.vertx.core.VertxException: Thread blocked > at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693) > at > java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) > at > java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729) > at > java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934) > at > org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:688) > at > org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:734) > at > org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:711) > at > io.vertx.ext.cluster.infinispan.InfinispanClusterManager.lambda$leave$5(InfinispanClusterManager.java:285) > at > io.vertx.ext.cluster.infinispan.InfinispanClusterManager$$Lambda$421/578931659.handle(Unknown > Source) > at > io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:265) > at io.vertx.core.impl.ContextImpl$$Lambda$27/1330754528.run(Unknown > Source) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at java.lang.Thread.run(Thread.java:748) > This looks like the exact issue that Radim mentioned with https://issues.jboss.org/browse/ISPN-8859. > > > > 2018-04-18 17:00 GMT+02:00 Thomas SEGISMONT : > >> So here's the Circular Referenced Suppressed Exception >> >> [stateTransferExecutor-thread--p221-t33] 2018-04-18T16:15:06.662+02:00 >> WARN [org.infinispan.statetransfer.InboundTransferTask] ISPN000210: Failed >> to request state of cache __vertx.subs from node sombrero-25286, segments >> {0} >> org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: >> Node sombrero-25286 was suspected >> at >> org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33) >> at >> org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31) >> at >> org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17) >> at >> org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23) >> at >> org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:51) >> at >> org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42) >> at >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.addRequest(JGroupsTransport.java:921) >> at >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeCommand(JGroupsTransport.java:815) >> at >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeCommand(JGroupsTransport.java:123) >> at >> org.infinispan.remoting.rpc.RpcManagerImpl.invokeCommand(RpcManagerImpl.java:138) >> at >> org.infinispan.statetransfer.InboundTransferTask.startTransfer(InboundTransferTask.java:134) >> at >> org.infinispan.statetransfer.InboundTransferTask.requestSegments(InboundTransferTask.java:113) 
>> at >> org.infinispan.conflict.impl.StateReceiverImpl$SegmentRequest.lambda$requestState$2(StateReceiverImpl.java:164) >> at >> org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1(LimitedExecutor.java:101) >> at >> org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144) >> at >> org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33) >> at >> org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174) >> at >> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) >> at >> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) >> at java.lang.Thread.run(Thread.java:748) >> Suppressed: java.util.concurrent.ExecutionException: >> org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: >> Node sombrero-25286 was suspected >> at >> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) >> at >> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915) >> at >> org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:82) >> at >> org.infinispan.remoting.rpc.RpcManagerImpl.blocking(RpcManagerImpl.java:260) >> ... 10 more >> [CIRCULAR >> REFERENCE:org.infinispan.remoting.transport.jgroups.SuspectException: >> ISPN000400: Node sombrero-25286 was suspected] >> >> It does not happen with 9.2.0.Final and prevents from using ISPN embedded >> with logback. Do you want me to file an issue ? >> >> 2018-04-18 11:45 GMT+02:00 Thomas SEGISMONT : >> >>> Hi folks, >>> >>> Sorry I've been busy on other things and couldn't get back to you >>> earlier. >>> >>> I tried running vertx-infinispan test suite with 9.2.1.Final today. >>> There are some problems still but I can't say which ones yet because I hit: >>> https://jira.qos.ch/browse/LOGBACK-1027 >>> >>> We use logback for test logs and all I get is: >>> >>> 2018-04-18 11:37:46,678 [stateTransferExecutor-thread--p4453-t24] ERROR >>> o.i.executors.LimitedExecutor - Exception in task >>> java.lang.StackOverflowError: null >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:54) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:60) >>> at >>> ch.qos.logback.classic.spi.ThrowableProxy.(ThrowableProxy.java:72) >>> ... so on so forth >>> >>> I will run the suite again without logback and tell you what the actual >>> problem is. >>> >>> Regards, >>> Thomas >>> >>> 2018-03-27 11:15 GMT+02:00 Pedro Ruivo : >>> >>>> JIRA: https://issues.jboss.org/browse/ISPN-8994 >>>> >>>> On 27-03-2018 10:08, Pedro Ruivo wrote: >>>> > >>>> > >>>> > On 27-03-2018 09:03, Sebastian Laskawiec wrote: >>>> >> At the moment, the cluster health status checker enumerates all >>>> caches >>>> >> in the cache manager [1] and checks whether those cashes are running >>>> >> and not in degraded more [2]. 
>>>> >> >>>> >> I'm not sure how counter caches have been implemented. One thing is >>>> >> for sure - they should be taken into account in this loop [3]. >>>> > >>>> > The private caches aren't listed by CacheManager.getCacheNames(). We >>>> > have to check them via InternalCacheRegistry.getInternalCacheNames(). >>>> > >>>> > I'll open a JIRA if you don't mind :) >>>> > >>>> >> >>>> >> [1] >>>> >> >>>> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22 >>>> >> >>>> >> [2] >>>> >> >>>> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25 >>>> >> >>>> >> [3] >>>> >> >>>> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24 >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180420/602bf4df/attachment.html From tsegismont at gmail.com Mon Apr 23 04:10:24 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Mon, 23 Apr 2018 10:10:24 +0200 Subject: [infinispan-dev] 9.2 EmbeddedCacheManager blocked at shutdown In-Reply-To: References: <88386d07-1762-c077-4980-d7322bd57644@infinispan.org> <8d316c36-9be8-33ae-d162-166b71c9907e@infinispan.org> <6c764138-fe92-3a05-b351-d8db086c866c@infinispan.org> Message-ID: Hi Will, I will create the JIRA and provide the TRACE level logs as soon as possible. Thanks for the update. 2018-04-20 20:10 GMT+02:00 William Burns : > On Fri, Apr 20, 2018 at 9:43 AM Thomas SEGISMONT > wrote: > >> I tried our test suite on a slower machine (iMac from 2011). It passes >> consistently there. >> >> On my laptop, I keep seeing this from time to time (in different tests): >> >> 2018-04-19T19:53:09.513 WARN [Context=org.infinispan.LOCKS]ISPN000320: >> After merge (or coordinator change), cache still hasn't recovered a >> majority of members and must stay in degraded mode. Current members are >> [sombrero-19385], lost members are [sombrero-42917], stable members are >> [sombrero-42917, sombrero-19385] >> > > I would expect the nodes to be leaving gracefully, which shouldn't cause a > merge. I am not sure how your test is producing that. Can you produce a > TRACE log and a JIRA for it? > > However if there is a merge, if you go down a single node it will always > be in DEGRADED mode, when using partition handling. This is due to not > having a simple majority as described in http://infinispan.org/docs/ > stable/user_guide/user_guide.html#partition_handling > > >> >> It happens when we shutdown nodes one after the other (even when waiting >> for cluster status to be "healthy" plus extra 2 seconds). 
>> >> After that the nodes remains blocked in DefaultCacheManager.stop >> >> 2018-04-19T19:49:29.242 AVERTISSEMENT Thread >> Thread[vert.x-worker-thread-5,5,main] has been blocked for 60774 ms, >> time limit is 60000 >> io.vertx.core.VertxException: Thread blocked >> at sun.misc.Unsafe.park(Native Method) >> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) >> at java.util.concurrent.CompletableFuture$Signaller. >> block(CompletableFuture.java:1693) >> at java.util.concurrent.ForkJoinPool.managedBlock( >> ForkJoinPool.java:3323) >> at java.util.concurrent.CompletableFuture.waitingGet( >> CompletableFuture.java:1729) >> at java.util.concurrent.CompletableFuture.join( >> CompletableFuture.java:1934) >> at org.infinispan.manager.DefaultCacheManager.terminate( >> DefaultCacheManager.java:688) >> at org.infinispan.manager.DefaultCacheManager.stopCaches( >> DefaultCacheManager.java:734) >> at org.infinispan.manager.DefaultCacheManager.stop( >> DefaultCacheManager.java:711) >> at io.vertx.ext.cluster.infinispan.InfinispanClusterManager. >> lambda$leave$5(InfinispanClusterManager.java:285) >> at io.vertx.ext.cluster.infinispan.InfinispanClusterManager$$ >> Lambda$421/578931659.handle(Unknown Source) >> at io.vertx.core.impl.ContextImpl.lambda$ >> executeBlocking$1(ContextImpl.java:265) >> at io.vertx.core.impl.ContextImpl$$Lambda$27/1330754528.run(Unknown >> Source) >> >> at java.util.concurrent.ThreadPoolExecutor.runWorker( >> ThreadPoolExecutor.java:1149) >> at java.util.concurrent.ThreadPoolExecutor$Worker.run( >> ThreadPoolExecutor.java:624) >> at io.netty.util.concurrent.FastThreadLocalRunnable.run( >> FastThreadLocalRunnable.java:30) >> at java.lang.Thread.run(Thread.java:748) >> > > This looks like the exact issue that Radim mentioned with > https://issues.jboss.org/browse/ISPN-8859. > > >> >> >> >> 2018-04-18 17:00 GMT+02:00 Thomas SEGISMONT : >> >>> So here's the Circular Referenced Suppressed Exception >>> >>> [stateTransferExecutor-thread--p221-t33] 2018-04-18T16:15:06.662+02:00 >>> WARN [org.infinispan.statetransfer.InboundTransferTask] ISPN000210: >>> Failed to request state of cache __vertx.subs from node sombrero-25286, >>> segments {0} >>> org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: >>> Node sombrero-25286 was suspected >>> at org.infinispan.remoting.transport.ResponseCollectors. >>> remoteNodeSuspected(ResponseCollectors.java:33) >>> at org.infinispan.remoting.transport.impl.SingleResponseCollector. >>> targetNotFound(SingleResponseCollector.java:31) >>> at org.infinispan.remoting.transport.impl.SingleResponseCollector. >>> targetNotFound(SingleResponseCollector.java:17) >>> at org.infinispan.remoting.transport.ValidSingleResponseCollector. >>> addResponse(ValidSingleResponseCollector.java:23) >>> at org.infinispan.remoting.transport.impl.SingleTargetRequest. >>> receiveResponse(SingleTargetRequest.java:51) >>> at org.infinispan.remoting.transport.impl. >>> SingleTargetRequest.onNewView(SingleTargetRequest.java:42) >>> at org.infinispan.remoting.transport.jgroups. >>> JGroupsTransport.addRequest(JGroupsTransport.java:921) >>> at org.infinispan.remoting.transport.jgroups.JGroupsTransport. >>> invokeCommand(JGroupsTransport.java:815) >>> at org.infinispan.remoting.transport.jgroups.JGroupsTransport. 
>>> invokeCommand(JGroupsTransport.java:123) >>> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeCommand( >>> RpcManagerImpl.java:138) >>> at org.infinispan.statetransfer.InboundTransferTask.startTransfer( >>> InboundTransferTask.java:134) >>> at org.infinispan.statetransfer.InboundTransferTask.requestSegments( >>> InboundTransferTask.java:113) >>> at org.infinispan.conflict.impl.StateReceiverImpl$ >>> SegmentRequest.lambda$requestState$2(StateReceiverImpl.java:164) >>> at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1( >>> LimitedExecutor.java:101) >>> at org.infinispan.executors.LimitedExecutor.runTasks( >>> LimitedExecutor.java:144) >>> at org.infinispan.executors.LimitedExecutor.access$100( >>> LimitedExecutor.java:33) >>> at org.infinispan.executors.LimitedExecutor$Runner.run( >>> LimitedExecutor.java:174) >>> at java.util.concurrent.ThreadPoolExecutor.runWorker( >>> ThreadPoolExecutor.java:1149) >>> at java.util.concurrent.ThreadPoolExecutor$Worker.run( >>> ThreadPoolExecutor.java:624) >>> at java.lang.Thread.run(Thread.java:748) >>> Suppressed: java.util.concurrent.ExecutionException: >>> org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: >>> Node sombrero-25286 was suspected >>> at java.util.concurrent.CompletableFuture.reportGet( >>> CompletableFuture.java:357) >>> at java.util.concurrent.CompletableFuture.get( >>> CompletableFuture.java:1915) >>> at org.infinispan.util.concurrent.CompletableFutures. >>> await(CompletableFutures.java:82) >>> at org.infinispan.remoting.rpc.RpcManagerImpl.blocking( >>> RpcManagerImpl.java:260) >>> ... 10 more >>> [CIRCULAR REFERENCE:org.infinispan.remoting.transport.jgroups.SuspectException: >>> ISPN000400: Node sombrero-25286 was suspected] >>> >>> It does not happen with 9.2.0.Final and prevents from using ISPN >>> embedded with logback. Do you want me to file an issue ? >>> >>> 2018-04-18 11:45 GMT+02:00 Thomas SEGISMONT : >>> >>>> Hi folks, >>>> >>>> Sorry I've been busy on other things and couldn't get back to you >>>> earlier. >>>> >>>> I tried running vertx-infinispan test suite with 9.2.1.Final today. >>>> There are some problems still but I can't say which ones yet because I hit: >>>> https://jira.qos.ch/browse/LOGBACK-1027 >>>> >>>> We use logback for test logs and all I get is: >>>> >>>> 2018-04-18 11:37:46,678 [stateTransferExecutor-thread--p4453-t24] >>>> ERROR o.i.executors.LimitedExecutor - Exception in task >>>> java.lang.StackOverflowError: null >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:54) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:60) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:72) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:60) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:72) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:60) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:72) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:60) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:72) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:60) >>>> at ch.qos.logback.classic.spi.ThrowableProxy.( >>>> ThrowableProxy.java:72) >>>> ... so on so forth >>>> >>>> I will run the suite again without logback and tell you what the actual >>>> problem is. 
>>>> >>>> Regards, >>>> Thomas >>>> >>>> 2018-03-27 11:15 GMT+02:00 Pedro Ruivo : >>>> >>>>> JIRA: https://issues.jboss.org/browse/ISPN-8994 >>>>> >>>>> On 27-03-2018 10:08, Pedro Ruivo wrote: >>>>> > >>>>> > >>>>> > On 27-03-2018 09:03, Sebastian Laskawiec wrote: >>>>> >> At the moment, the cluster health status checker enumerates all >>>>> caches >>>>> >> in the cache manager [1] and checks whether those cashes are >>>>> running >>>>> >> and not in degraded more [2]. >>>>> >> >>>>> >> I'm not sure how counter caches have been implemented. One thing is >>>>> >> for sure - they should be taken into account in this loop [3]. >>>>> > >>>>> > The private caches aren't listed by CacheManager.getCacheNames(). We >>>>> > have to check them via InternalCacheRegistry. >>>>> getInternalCacheNames(). >>>>> > >>>>> > I'll open a JIRA if you don't mind :) >>>>> > >>>>> >> >>>>> >> [1] >>>>> >> https://github.com/infinispan/infinispan/blob/master/core/ >>>>> src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22 >>>>> >> >>>>> >> [2] >>>>> >> https://github.com/infinispan/infinispan/blob/master/core/ >>>>> src/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25 >>>>> >> >>>>> >> [3] >>>>> >> https://github.com/infinispan/infinispan/blob/master/core/ >>>>> src/main/java/org/infinispan/health/impl/ClusterHealthImpl. >>>>> java#L23-L24 >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180423/5e54e815/attachment-0001.html From sergey.chernolyas at gmail.com Mon Apr 23 04:26:56 2018 From: sergey.chernolyas at gmail.com (Sergey Chernolyas) Date: Mon, 23 Apr 2018 11:26:56 +0300 Subject: [infinispan-dev] Search keys by query Message-ID: *Hi! * *I want ask about search keys. For example, I have a complex key and the complex key (POJO) have a field ?type?. It is logical if I find all keys with required type by query. Now query for complex keys not work. Method ?list()? return empty list(). Is the feature implementable?* -- --------------------- With best regards, Sergey Chernolyas -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180423/db772fd4/attachment.html From anistor at redhat.com Mon Apr 23 05:23:47 2018 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 23 Apr 2018 12:23:47 +0300 Subject: [infinispan-dev] Search keys by query In-Reply-To: References: Message-ID: <3866e43a-d5dc-59b6-208e-a6852ee7830a@redhat.com> Hi Sergey, keys are just keys. Lookup by key works only if you know the key in advance, be it a simple or complex key. Keys are not indexed. So no, searching for keys does not work and there is no plan to support that. It's one of the many things Infinispan cannot do because it is not a relational database and we do no plan to become one :). But there are ways to overcome this limitation. 
You already de-normalize your data when placing it in the grid, because Infinispan does not manage relations. During this process you should copy relevant properties of the key into the value itself if you intend to search by those properties. Adrian On 04/23/2018 11:26 AM, Sergey Chernolyas wrote: > *Hi! * > *I want ask about search keys. For example, I have a complex key and > the complex key (POJO) have a field ?type?.? It is logical if I find > all keys with required type by query. Now query for complex keys not > work. Method ?list()? return empty list(). Is the feature implementable?* > > -- > --------------------- > > With best regards, Sergey Chernolyas > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180423/d7514767/attachment.html From galder at redhat.com Mon Apr 30 11:09:12 2018 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 30 Apr 2018 15:09:12 +0000 Subject: [infinispan-dev] (no subject) Message-ID: Hi Sebastian, Did you mention something about x-site not working on master? The reason I ask is cos I was trying to create a state transfer test for [1] and there are some odds happening. In my test, I start LON site configured with NYC but NYC is not up yet. [1] https://issues.jboss.org/browse/ISPN-9111 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180430/4e36b539/attachment.html From galder at redhat.com Mon Apr 30 11:16:26 2018 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 30 Apr 2018 15:16:26 +0000 Subject: [infinispan-dev] (no subject) In-Reply-To: References: Message-ID: Ups, sent too early! So, the NYC site is not up, so I see in the logs: 2018-04-30 16:53:49,411 ERROR [org.infinispan.test.fwk.TEST_RELAY2] (testng-ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]:[]) ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]-NodeA-55452: no route to NYC: dropping message But the put hangs and never completes [2]. I've traced the code and [3] never gets called, with no events. I think this might be a JGroups bug because ChannelCallbacks implements UpHandler, but JChannel never deals with a receiver that might implement UpHandler, so it never delivers site unreachable message up the stack. @Bela? Cheers, Galder [2] https://gist.github.com/galderz/ada0e9317889eaa272845430b8d36ba1 [3] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/remoting/transport/jgroups/JGroupsTransport.java#L1366 [4] https://github.com/belaban/JGroups/blob/master/src/org/jgroups/JChannel.java#L953-L983 On Mon, Apr 30, 2018 at 5:09 PM Galder Zamarreno wrote: > Hi Sebastian, > > Did you mention something about x-site not working on master? > > The reason I ask is cos I was trying to create a state transfer test for > [1] and there are some odds happening. > > In my test, I start LON site configured with NYC but NYC is not up yet. > > [1] https://issues.jboss.org/browse/ISPN-9111 > -------------- next part -------------- An HTML attachment was scrubbed... 
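To make Adrian's denormalization advice (a few messages above) concrete, here is a hedged embedded-mode sketch. The class, field and method names are purely illustrative (not from Sergey's application), the cache is assumed to have indexing enabled, and the exact annotations and query API may differ slightly between versions; the point is only that the key's "type" discriminator is copied into the indexed value at write time, so the query targets the value rather than the key.

import java.util.List;

import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.infinispan.Cache;
import org.infinispan.query.Search;
import org.infinispan.query.dsl.Query;
import org.infinispan.query.dsl.QueryFactory;

// Value object stored against the complex key; "type" is duplicated here from
// the key so it can be searched, since keys themselves are not indexed.
@Indexed
public class OrderEntry {

    @Field(analyze = Analyze.NO)
    String type;

    @Field(analyze = Analyze.NO)
    String payload;

    static List<OrderEntry> findByType(Cache<?, OrderEntry> cache, String type) {
        QueryFactory qf = Search.getQueryFactory(cache);
        Query query = qf.create("FROM " + OrderEntry.class.getName() + " e WHERE e.type = :t");
        query.setParameter("t", type);
        return query.list();
    }
}

With this shape, list() returns the matching values; if the keys themselves are needed afterwards, they can be carried as another duplicated field in the value or looked up from the returned entries.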
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180430/b6c1e983/attachment.html From galder at redhat.com Mon Apr 30 11:26:28 2018 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 30 Apr 2018 15:26:28 +0000 Subject: [infinispan-dev] (no subject) In-Reply-To: References: Message-ID: Actually Sebastian, I don't think this is your problem because your site configs are ASYNC. This only appears when a site is configured with SYNC, which is when a response is waited for. Cheers, On Mon, Apr 30, 2018 at 5:18 PM Galder Zamarreno wrote: > Ups, sent too early! So, the NYC site is not up, so I see in the logs: > > 2018-04-30 16:53:49,411 ERROR [org.infinispan.test.fwk.TEST_RELAY2] > (testng-ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]:[]) > ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]-NodeA-55452: > no route to NYC: dropping message > > But the put hangs and never completes [2]. I've traced the code and [3] > never gets called, with no events. > > I think this might be a JGroups bug because ChannelCallbacks > implements UpHandler, but JChannel never deals with a receiver that might > implement UpHandler, so it never delivers site unreachable message up the > stack. > > @Bela? > > Cheers, > Galder > > [2] https://gist.github.com/galderz/ada0e9317889eaa272845430b8d36ba1 > [3] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/remoting/transport/jgroups/JGroupsTransport.java#L1366 > [4] > https://github.com/belaban/JGroups/blob/master/src/org/jgroups/JChannel.java#L953-L983 > > > > On Mon, Apr 30, 2018 at 5:09 PM Galder Zamarreno > wrote: > >> Hi Sebastian, >> >> Did you mention something about x-site not working on master? >> >> The reason I ask is cos I was trying to create a state transfer test for >> [1] and there are some odds happening. >> >> In my test, I start LON site configured with NYC but NYC is not up yet. >> >> [1] https://issues.jboss.org/browse/ISPN-9111 >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180430/a6993b1e/attachment.html From slaskawi at redhat.com Mon Apr 30 21:46:30 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 01 May 2018 01:46:30 +0000 Subject: [infinispan-dev] (no subject) In-Reply-To: References: Message-ID: Hey Galder, I haven't sent any email since I didn't have enough time to create a proper reproducer or investigate what was going on. During the summit work, I switched from a custom build of 9.2.1.Final to the latest master. This resulted in all sites going up and down. I was struggling for 5 hours and I couldn't stabilize it. Then, 30 mins before rehearsal session I decided to revert back to 9.2.1.Final. I wish I had more clues. Maybe I haven't done proper migration or used too short timeouts for some FD* protocol. It's hard to say. Thanks, Sebastian On Mon, Apr 30, 2018 at 5:16 PM Galder Zamarreno wrote: > Ups, sent too early! So, the NYC site is not up, so I see in the logs: > > 2018-04-30 16:53:49,411 ERROR [org.infinispan.test.fwk.TEST_RELAY2] > (testng-ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]:[]) > ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]-NodeA-55452: > no route to NYC: dropping message > > But the put hangs and never completes [2]. 
I've traced the code and [3] > never gets called, with no events. > > I think this might be a JGroups bug because ChannelCallbacks > implements UpHandler, but JChannel never deals with a receiver that might > implement UpHandler, so it never delivers site unreachable message up the > stack. > > @Bela? > > Cheers, > Galder > > [2] https://gist.github.com/galderz/ada0e9317889eaa272845430b8d36ba1 > [3] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/remoting/transport/jgroups/JGroupsTransport.java#L1366 > [4] > https://github.com/belaban/JGroups/blob/master/src/org/jgroups/JChannel.java#L953-L983 > > > > On Mon, Apr 30, 2018 at 5:09 PM Galder Zamarreno > wrote: > >> Hi Sebastian, >> >> Did you mention something about x-site not working on master? >> >> The reason I ask is cos I was trying to create a state transfer test for >> [1] and there are some odds happening. >> >> In my test, I start LON site configured with NYC but NYC is not up yet. >> >> [1] https://issues.jboss.org/browse/ISPN-9111 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180501/8991c80c/attachment-0001.html From slaskawi at redhat.com Mon Apr 30 22:39:25 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 01 May 2018 02:39:25 +0000 Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Fair enough Tom. Thanks for explanation. One more request - would you guys be OK with me adding a node-identifier="${ jboss.node.name}" to the transaction subsystem template [1]? This way we wouldn't need to copy it into Infinispan (since we need to set it). [1] https://github.com/wildfly/wildfly/blob/master/transactions/src/main/resources/subsystem-templates/transactions.xml#L6 On Wed, Apr 18, 2018 at 3:37 PM Tom Jenkinson wrote: > On 18 April 2018 at 14:07, Sebastian Laskawiec > wrote: > >> Hey Tom, >> >> Comments inlined. >> >> Thanks, >> Sebastian >> >> On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson >> wrote: >> >>> >>> >>> On 16 April 2018 at 09:31, <> wrote: >>> >>>> Adding +WildFly Dev to the loop >>> >>> >>>> >>>> Thanks for the explanation Rado. >>>> >>>> TL;DR: A while ago Sanne pointed out that we do not set >>>> `node-identifier` >>>> in transaction subsystem by default. The default value for the >>>> `node-identifier` attribute it `1`. Not setting this attribute might >>>> cause >>>> problems in transaction recovery. Perhaps we could follow Rado's idea >>>> and >>>> set it to node name by default? >>>> >>> Indeed - it would cause serious data integrity problems if a non-unique >>> node-identifier is used. >>> >>>> Some more comments inlined. >>>> >>>> Thanks, >>>> Sebastian >>>> >>>> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar >>>> wrote: >>>> >>>> > Hi Sebastian, >>>> > >>>> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec >>>> > wrote: >>>> > > Hey Rado, Paul, >>>> > > >>>> > > I started looking into this issue and it turned out that WF >>>> subsystem >>>> > > template doesn't provide `node-identifier` attribute [1]. >>>> > >>>> > I assume you mean that the default WildFly server profiles do not >>>> >>> > explicitly define the attribute. Right ? 
>
> On Mon, Apr 30, 2018 at 5:09 PM Galder Zamarreno wrote:
>
>> Hi Sebastian,
>>
>> Did you mention something about x-site not working on master?
>>
>> The reason I ask is because I was trying to create a state transfer test
>> for [1] and some odd things are happening.
>>
>> In my test, I start the LON site configured with NYC, but NYC is not up yet.
>>
>> [1] https://issues.jboss.org/browse/ISPN-9111
>>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180430/a6993b1e/attachment.html

From slaskawi at redhat.com  Mon Apr 30 21:46:30 2018
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Tue, 01 May 2018 01:46:30 +0000
Subject: [infinispan-dev] (no subject)
In-Reply-To:
References:
Message-ID:

Hey Galder,

I haven't sent any email since I didn't have enough time to create a proper
reproducer or to investigate what was going on.

During the summit work, I switched from a custom build of 9.2.1.Final to the
latest master. This resulted in all sites going up and down. I was struggling
for 5 hours and I couldn't stabilize it. Then, 30 mins before the rehearsal
session, I decided to revert back to 9.2.1.Final.

I wish I had more clues. Maybe I haven't done a proper migration, or I used
too short timeouts for some FD* protocol. It's hard to say.

Thanks,
Sebastian

On Mon, Apr 30, 2018 at 5:16 PM Galder Zamarreno wrote:

> Oops, sent too early! So, the NYC site is not up, so I see in the logs:
>
> 2018-04-30 16:53:49,411 ERROR [org.infinispan.test.fwk.TEST_RELAY2]
> (testng-ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]:[])
> ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]-NodeA-55452:
> no route to NYC: dropping message
>
> But the put hangs and never completes [2]. I've traced the code and [3]
> never gets called, with no events.
>
> I think this might be a JGroups bug because ChannelCallbacks implements
> UpHandler, but JChannel never deals with a receiver that might implement
> UpHandler, so it never delivers the site-unreachable message up the stack.
>
> @Bela?
>
> Cheers,
> Galder
>
> [2] https://gist.github.com/galderz/ada0e9317889eaa272845430b8d36ba1
> [3] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/remoting/transport/jgroups/JGroupsTransport.java#L1366
> [4] https://github.com/belaban/JGroups/blob/master/src/org/jgroups/JChannel.java#L953-L983
>
> On Mon, Apr 30, 2018 at 5:09 PM Galder Zamarreno wrote:
>
>> Hi Sebastian,
>>
>> Did you mention something about x-site not working on master?
>>
>> The reason I ask is because I was trying to create a state transfer test
>> for [1] and some odd things are happening.
>>
>> In my test, I start the LON site configured with NYC, but NYC is not up yet.
>>
>> [1] https://issues.jboss.org/browse/ISPN-9111
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180501/8991c80c/attachment-0001.html

From slaskawi at redhat.com  Mon Apr 30 22:39:25 2018
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Tue, 01 May 2018 02:39:25 +0000
Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template
In-Reply-To:
References:
Message-ID:

Fair enough Tom. Thanks for the explanation.

One more request - would you guys be OK with me adding
node-identifier="${jboss.node.name}" to the transaction subsystem template [1]?
This way we wouldn't need to copy it into Infinispan (since we need to set it).

[1] https://github.com/wildfly/wildfly/blob/master/transactions/src/main/resources/subsystem-templates/transactions.xml#L6
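Concretely, the proposal is to put the attribute on the core-environment
element in that template. A sketch of what the fragment could look like (the
surrounding template content is abbreviated and the schema version may differ):

    <subsystem xmlns="urn:jboss:domain:transactions:4.0">
       <core-environment node-identifier="${jboss.node.name}">
          <process-id>
             <uuid/>
          </process-id>
       </core-environment>
       <recovery-environment socket-binding="txn-recovery-environment"
                             status-socket-binding="txn-status-manager"/>
    </subsystem>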
On Wed, Apr 18, 2018 at 3:37 PM Tom Jenkinson wrote:

> On 18 April 2018 at 14:07, Sebastian Laskawiec wrote:
>
>> Hey Tom,
>>
>> Comments inlined.
>>
>> Thanks,
>> Sebastian
>>
>> On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson wrote:
>>
>>> On 16 April 2018 at 09:31, <> wrote:
>>>
>>>> Adding +WildFly Dev to the loop
>>>>
>>>> Thanks for the explanation Rado.
>>>>
>>>> TL;DR: A while ago Sanne pointed out that we do not set `node-identifier`
>>>> in the transaction subsystem by default. The default value for the
>>>> `node-identifier` attribute is `1`. Not setting this attribute might cause
>>>> problems in transaction recovery. Perhaps we could follow Rado's idea and
>>>> set it to the node name by default?
>>>
>>> Indeed - it would cause serious data integrity problems if a non-unique
>>> node-identifier is used.
>>>
>>>> Some more comments inlined.
>>>>
>>>> Thanks,
>>>> Sebastian
>>>>
>>>> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar wrote:
>>>>
>>>> > Hi Sebastian,
>>>> >
>>>> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec wrote:
>>>> > > Hey Rado, Paul,
>>>> > >
>>>> > > I started looking into this issue and it turned out that the WF
>>>> > > subsystem template doesn't provide the `node-identifier` attribute [1].
>>>> >
>>>> > I assume you mean that the default WildFly server profiles do not
>>>> > explicitly define the attribute. Right? Thus the value defaults in
>>>> > the model to "1"
>>>> > https://github.com/wildfly/wildfly/blob/master/transactions/src/main/java/org/jboss/as/txn/subsystem/TransactionSubsystemRootResourceDefinition.java#L97
>>>> > whose sole intention seems to be to log a warning on boot if the value
>>>> > is unchanged.
>>>> > Why they decided on a constant that is inherently not unique, as
>>>> > opposed to defaulting to the node name (which we already require to be
>>>> > unique) as the clustering node name or the undertow instance-id does,
>>>> > is unclear to me.
>>>> > Some context is on https://issues.jboss.org/browse/WFLY-1119.
>>>>
>>>> In an OpenShift environment we could set it to `hostname`. This is
>>>> guaranteed to be unique in the whole OpenShift cluster.
>>>
>>> We do this too in EAP images.
>>> To Rado's point, the default is "1" so we can print the warning to alert
>>> people they are misconfigured - it seems to be working :)
>>
>> This is the point. From my understanding, if we set it to the node name
>> (instead of "1"), we could make it always work correctly. We could even
>> remove the code that emits the warning (since the node name needs to be
>> unique).
>>
>> To sum it up - if we decided to proceed this way, there would be no
>> requirement to set the node-identifier at all.
>
> For OpenShift you are right: there is no requirement for someone to change
> the node-identifier from the pod name, and that is why the EAP images do that.
>
> For bare metal it is different, as there can be two servers on the same
> machine. If they were configured to use the hostname as their node-identifier
> and were also connected to the same resource managers or the same object
> store, they would interfere with each other.
>
>>>> > > I'm not sure if you guys are the right people to ask, but is it safe
>>>> > > to leave it set to the default? Or shall I override our Infinispan
>>>> > > templates and add this parameter (as I mentioned before, in OpenShift
>>>> > > I wanted to set it to the Pod name trimmed to the last 23 chars, since
>>>> > > this is the limit).
>>>
>>> Putting a response to this in line - I am not certain who originally
>>> proposed this.
>>> You must use a globally unique node-identifier. If you are certain the
>>> last 23 characters guarantee uniqueness, that would be valid; if there is
>>> a chance they are not unique, it is not valid to trim.
>>
>> If that's not an issue, again, we could use the same limit as we have for
>> the node name.
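For what it's worth, the trimming being discussed is just keeping the
23-character suffix of the pod name. A sketch, with the obvious caveat from
above that such a suffix is only as unique as the pod-name generation makes it:

    // Illustration only: derive a candidate node identifier from a pod name
    // by keeping the last 23 characters. This does NOT guarantee uniqueness.
    final class NodeIdentifiers {

       private static final int MAX_LENGTH = 23;

       static String fromPodName(String podName) {
          if (podName.length() <= MAX_LENGTH) {
             return podName;
          }
          // keep the suffix, since generated pod names usually differ at the end
          return podName.substring(podName.length() - MAX_LENGTH);
       }
    }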
>>>> URL: >>>> http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180416/65962cf1/attachment-0001.html >>>> >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180501/a3d31a5d/attachment-0001.html