From ttarrant at redhat.com Wed May 2 04:06:44 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 2 May 2018 10:06:44 +0200 Subject: [infinispan-dev] Infinispan 9.2.2.Final and 9.3.0.Alpha1 are out Message-ID: We have two releases to announce: first of all is 9.2.2.Final which introduces a second-level cache provider for the upcoming Hibernate ORM 5.3 as well as numerous bugfixes. [1] Next is 9.3.0.Alpha1 which is the first iteration of our next release. [2] The main item here, aside from bugfixes and preparation work for upcoming features, is the upgrade of our server component to WildFly 12. Go and get them on our download page [3] Tristan [1] https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12310799&version=12337245 [2] https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12310799&version=12337078 [3] https://infinispan.org/download/ From ttarrant at redhat.com Wed May 2 04:24:28 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 2 May 2018 10:24:28 +0200 Subject: [infinispan-dev] Infinispan chat moves to Zulip Message-ID: For over 9 years Infinispan has used IRC for real-time interaction between the development team, contributors and users. While IRC has served us well over the years, we decided that the time has come to start using something better. After trying out a few "candidates" we have settled on Zulip. Zulip gives us many improvements over IRC and over many of the other alternatives out there. 
In particular: * multiple conversation streams * further filtered with the use of topics * organization management to organize users into groups * it's open source So, if you want to chat with us, join us on the Infinispan Zulip Organization [1] Tristan [1] https://infinispan.zulipchat.com From galder at redhat.com Thu May 3 12:49:06 2018 From: galder at redhat.com (Galder Zamarreno) Date: Thu, 03 May 2018 16:49:06 +0000 Subject: [infinispan-dev] Kubernetes simple demo failing with OpenShift 3.7.2 and latest FMP Message-ID: Hey Sebastian, I'm trying to update simple tutorials to Infinispan 9.2.2.Final but Kubernetes demo does not seem to be working. I've started OpenShift 3.7.2 and have updated FMP to 3.5.33 and build fails. Error is: > error: build error: image "java:8-jre-alpine" must specify a user that is numeric and within the range of allowed users Cheers, Galder -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180503/8ff65865/attachment.html From galder at redhat.com Fri May 4 03:45:18 2018 From: galder at redhat.com (Galder Zamarreno) Date: Fri, 04 May 2018 07:45:18 +0000 Subject: [infinispan-dev] Kubernetes simple demo failing with OpenShift 3.7.2 and latest FMP In-Reply-To: References: Message-ID: Hey Sebastian, I've been checking with Clement and this might be due to OpenShift not allowing that base image for source builds. It seems that to do S2I you need a base image with a certain user (I think that's 1001) and neither the java/ nor the fabric8/ ones do that. Clement mentioned redhat-openjdk-18/openjdk18-openshift images might do that but those I think are behind VPN or require some form of login. Clement also mentioned this might work with minishift, but I've not tried yet. The alternative might be to switch that example to use binary builds and adjust instructions for OpenShift and plain Kubernetes. 
Cheers, Galder On Thu, May 3, 2018 at 6:49 PM Galder Zamarreno wrote: > Hey Sebastian, > > I'm trying to update simple tutorials to Infinispan 9.2.2.Final but > Kubernetes demo does not seem to be working. > > I've started OpenShift 3.7.2 and have updated FMP to 3.5.33 and build > fails. Error is: > > > error: build error: image "java:8-jre-alpine" must specify a user that > is numeric and within the range of allowed users > > Cheers, > Galder > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180504/3d8ec6a3/attachment.html From rvansa at redhat.com Fri May 4 04:15:38 2018 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 4 May 2018 10:15:38 +0200 Subject: [infinispan-dev] Kubernetes simple demo failing with OpenShift 3.7.2 and latest FMP In-Reply-To: References: Message-ID: <9b8e78af-4c4d-3224-92ae-f09a3b5ca18c@redhat.com> I think you can override this limitation using |oc adm policy add-scc-to-user anyuid developer | though it'd not be a recommended setting for production use... R. On 05/04/2018 09:45 AM, Galder Zamarreno wrote: > Hey Sebastian, > > I've been checking with Clement and this might be due to OpenShift not > allowing that base image for source builds. > > It seems like that to do S2I you need a base image with a certain user > (I think that's 1001) and neither the java/ nor the fabric8/ ones do > that. Clement mentioned?redhat-openjdk-18/openjdk18-openshift images > might do that but those I think are behind VPN or require some for of > login. Clement also mentioned this might work with minishift, but I've > not tried yet. > > The alternative might be to switch that example to use binary builds > and adjust instructions for OpenShift and plain Kubernetes. 
> > Cheers, > Galder > > On Thu, May 3, 2018 at 6:49 PM Galder Zamarreno > wrote: > > Hey Sebastian, > > I'm trying to update simple tutorials to Infinispan 9.2.2.Final > but Kubernetes demo does not seem to be working. > > I've started OpenShift 3.7.2 and have updated FMP to 3.5.33 and > build fails. Error is: > > > error: build error: image "java:8-jre-alpine" must specify a > user that is numeric and within the range of allowed users > > Cheers, > Galder > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From galder at redhat.com Fri May 4 10:29:30 2018 From: galder at redhat.com (Galder Zamarreno) Date: Fri, 04 May 2018 14:29:30 +0000 Subject: [infinispan-dev] Kubernetes simple demo failing with OpenShift 3.7.2 and latest FMP In-Reply-To: <9b8e78af-4c4d-3224-92ae-f09a3b5ca18c@redhat.com> References: <9b8e78af-4c4d-3224-92ae-f09a3b5ca18c@redhat.com> Message-ID: Still same error after applying that to developer On Fri, May 4, 2018 at 10:18 AM Radim Vansa wrote: > I think you can override this limitation using > > |oc adm policy add-scc-to-user anyuid developer | > > though it'd not be a recommended setting for production use... > > R. > > > > On 05/04/2018 09:45 AM, Galder Zamarreno wrote: > > Hey Sebastian, > > > > I've been checking with Clement and this might be due to OpenShift not > > allowing that base image for source builds. > > > > It seems like that to do S2I you need a base image with a certain user > > (I think that's 1001) and neither the java/ nor the fabric8/ ones do > > that. Clement mentioned redhat-openjdk-18/openjdk18-openshift images > > might do that but those I think are behind VPN or require some for of > > login. Clement also mentioned this might work with minishift, but I've > > not tried yet. 
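A sketch of the workaround quoted above, together with one possible reason it had no effect: on OpenShift, S2I builds run under the project's `builder` service account rather than the logged-in user, so the SCC grant may need to target that service account instead. The `-z builder` variant below is an assumption, not something verified in this thread:

```shell
# Development-only workaround -- not a recommended production setting.
# Grant the anyuid SCC to the logged-in user, as suggested above:
oc adm policy add-scc-to-user anyuid developer

# S2I builds run as the project's "builder" service account, so the
# grant may need to target it instead (-z selects a service account
# in the current project):
oc adm policy add-scc-to-user anyuid -z builder

# Revert the grants once finished:
oc adm policy remove-scc-from-user anyuid developer
oc adm policy remove-scc-from-user anyuid -z builder
```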
> > > > The alternative might be to switch that example to use binary builds > > and adjust instructions for OpenShift and plain Kubernetes. > > > > Cheers, > > Galder > > > > On Thu, May 3, 2018 at 6:49 PM Galder Zamarreno > > wrote: > > > > Hey Sebastian, > > > > I'm trying to update simple tutorials to Infinispan 9.2.2.Final > > but Kubernetes demo does not seem to be working. > > > > I've started OpenShift 3.7.2 and have updated FMP to 3.5.33 and > > build fails. Error is: > > > > > error: build error: image "java:8-jre-alpine" must specify a > > user that is numeric and within the range of allowed users > > > > Cheers, > > Galder > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180504/f52054d7/attachment.html From slaskawi at redhat.com Sun May 6 18:46:27 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Sun, 06 May 2018 22:46:27 +0000 Subject: [infinispan-dev] Kubernetes simple demo failing with OpenShift 3.7.2 and latest FMP In-Reply-To: References: <9b8e78af-4c4d-3224-92ae-f09a3b5ca18c@redhat.com> Message-ID: Have you tried my latest PR, which has been waiting for the review for over 2 months :D https://github.com/infinispan/infinispan-simple-tutorials/pull/42 On Fri, May 4, 2018 at 4:35 PM Galder Zamarreno wrote: > Still same error after applying that to developer > > On Fri, May 4, 2018 at 10:18 AM Radim Vansa wrote: > >> I think you can override this limitation using >> >> |oc adm policy add-scc-to-user anyuid developer | >> >> though it'd not be a recommended setting for production use... >> >> R. >> >> >> >> On 05/04/2018 09:45 AM, Galder Zamarreno wrote: >> > Hey Sebastian, >> > >> > I've been checking with Clement and this might be due to OpenShift not >> > allowing that base image for source builds. >> > >> > It seems like that to do S2I you need a base image with a certain user >> > (I think that's 1001) and neither the java/ nor the fabric8/ ones do >> > that. Clement mentioned redhat-openjdk-18/openjdk18-openshift images >> > might do that but those I think are behind VPN or require some for of >> > login. Clement also mentioned this might work with minishift, but I've >> > not tried yet. >> > >> > The alternative might be to switch that example to use binary builds >> > and adjust instructions for OpenShift and plain Kubernetes. >> > >> > Cheers, >> > Galder >> > >> > On Thu, May 3, 2018 at 6:49 PM Galder Zamarreno > > > wrote: >> > >> > Hey Sebastian, >> > >> > I'm trying to update simple tutorials to Infinispan 9.2.2.Final >> > but Kubernetes demo does not seem to be working. 
>> > >> > I've started OpenShift 3.7.2 and have updated FMP to 3.5.33 and >> > build fails. Error is: >> > >> > > error: build error: image "java:8-jre-alpine" must specify a >> > user that is numeric and within the range of allowed users >> > >> > Cheers, >> > Galder >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180506/d9cfc5ed/attachment-0001.html From slaskawi at redhat.com Sun May 6 19:10:17 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Sun, 06 May 2018 23:10:17 +0000 Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Ok, so how about doing the same thing you suggested, but just more explicitly - adding node-identifier="${jboss.node.name*:1*}". This way the bare metal deployment should be happy (since the default is still 1) and we wouldn't need to override it in Infinispan. On Tue, May 1, 2018 at 10:09 AM Tom Jenkinson wrote: > I am not sure - the default should be "1" for the bare metal case so the > warning is reliably triggered but the default can be the pod name for > OpenShift templates that only allow a single instance of the application > server - does that help? 
> > The file you appear to want to edit is shared by bare metal and other > deployment environments so it would be confusing to set the default to > jboss.node.name there IMO. > > On 1 May 2018 at 03:39, Sebastian Laskawiec wrote: > >> Fair enough Tom. Thanks for the explanation. >> >> One more request - would you guys be OK with me adding >> a node-identifier="${jboss.node.name}" to the transaction subsystem >> template [1]? This way we wouldn't need to copy it into Infinispan (since >> we need to set it). >> >> [1] >> https://github.com/wildfly/wildfly/blob/master/transactions/src/main/resources/subsystem-templates/transactions.xml#L6 >> >> On Wed, Apr 18, 2018 at 3:37 PM Tom Jenkinson >> wrote: >> >>> On 18 April 2018 at 14:07, Sebastian Laskawiec >>> wrote: >>> >>>> Hey Tom, >>>> >>>> Comments inlined. >>>> >>>> Thanks, >>>> Sebastian >>>> >>>> On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson >>>> wrote: >>>>> >>>>> >>>>> On 16 April 2018 at 09:31, <> wrote: >>>>> >>>>>> Adding +WildFly Dev to the loop >>>>> >>>>> >>>>>> >>>>>> Thanks for the explanation Rado. >>>>>> >>>>>> TL;DR: A while ago Sanne pointed out that we do not set >>>>>> `node-identifier` >>>>>> in transaction subsystem by default. The default value for the >>>>>> `node-identifier` attribute is `1`. Not setting this attribute might >>>>>> cause >>>>>> problems in transaction recovery. Perhaps we could follow Rado's idea >>>>>> and >>>>>> set it to node name by default? >>>>>> >>>>> Indeed - it would cause serious data integrity problems if a >>>>> non-unique node-identifier is used. >>>>>> Some more comments inlined. 
>>>>>> Thanks, >>>>>> Sebastian >>>>>> >>>>>> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar >>>>>> wrote: >>>>>> >>>>>> > Hi Sebastian, >>>>>> > >>>>>> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec >>>>>> > wrote: >>>>>> > > Hey Rado, Paul, >>>>>> > > >>>>>> > > I started looking into this issue and it turned out that WF >>>>>> subsystem >>>>>> > > template doesn't provide `node-identifier` attribute [1]. >>>>>> > >>>>>> > I assume you mean that the default WildFly server profiles do not >>>>>> >>>>> > explicitly define the attribute. Right? Thus the value defaults in >>>>> >>>>> >>>>>> > the model to "1" >>>>>> > >>>>>> > >>>>>> https://github.com/wildfly/wildfly/blob/master/transactions/src/main/java/org/jboss/as/txn/subsystem/TransactionSubsystemRootResourceDefinition.java#L97 >>>>>> > whose sole intention seems to be to log a warning on boot if the >>>>>> value >>>>>> > is unchanged. >>>>>> > Why they decided on a constant that will be inherently not unique as >>>>>> > opposed to defaulting to the node name (which we already require to >>>>>> be >>>>>> > unique) as clustering node name or undertow instance-id does, is >>>>>> > unclear to me. >>>>>> > Some context is on https://issues.jboss.org/browse/WFLY-1119. >>>>>> > >>>>>> >>>>>> In an OpenShift environment we could set it to `hostname`. This is >>>>>> guaranteed >>>>>> to be unique in the whole OpenShift cluster. >>>>>> >>>>>> >>>>>> We do this too in EAP images. >>>>> To Rado's point, the default is "1" so we can print the warning to >>>>> alert people they are misconfigured - it seems to be working :) >>>> >>>> This is the point. From my understanding, if we set it to node name >>>> (instead of "1"), we could make it always work correctly. We could even >>>> remove the code that emits the warning (since the node name needs to be >>>> unique). >>>> >>>> To sum it up - if we decided to proceed this way, there would be no >>>> requirement of setting the node-identifier at all. 
>>> For OpenShift you are right there is no requirement for someone to >>> change the node-identifier from the podname and so that is why EAP images >>> do that. >>> >>> For bare-metal it is different as there can be two servers on the same >>> machine, so if they were configured to use the hostname as their node-identifier >>> then, if they were also connected to the same resource managers or the same >>> object store, they would interfere with each other. >>> >>> >>>> >>>> >>>>> >>>>> >>>>>> > >>>>> >>>>> >>>>>> > > I'm not sure if you guys are the right people to ask, but is it >>>>>> safe to >>>>>> > > leave it set to default? Or shall I override our Infinispan >>>>>> templates and >>>>>> > > add this parameter (as I mentioned before, in OpenShift I >>>>>> wanted to >>>>>> > set >>>>>> > > it as Pod name trimmed to the last 23 chars since this is the >>>>>> limit). >>>>> Putting a response to this in line - I am not certain who originally >>>>> proposed this. >>>>> >>>>> You must use a globally unique node-identifier. If you are certain the >>>>> last 23 characters guarantee that it would be valid - if there is a chance >>>>> they are not unique it is not valid to trim. >>>> >>>> If that's not an issue, again, we could use the same limit as we have >>>> for node name. >>>> >>>>> >>>>> >>>>> >>>>>> > >>>>> >>>>> >>>>>> > It is not safe to leave it set to "1" as that results in >>>>>> inconsistent >>>>>> > processing of transaction recovery. >>>>>> > IIUC we already set it to the node name for both EAP and JDG >>>>>> > >>>>>> > >>>>>> https://github.com/jboss-openshift/cct_module/blob/master/os-eap70-openshift/added/standalone-openshift.xml#L411 >>>>>> > >>>>>> > >>>>>> https://github.com/jboss-openshift/cct_module/blob/master/os-jdg7-conffiles/added/clustered-openshift.xml#L282 >>>>> > which in turn defaults to the pod name, so which profiles are we >>>>> >>>>> >>>>>> > talking about here? 
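The trimming mentioned above is easy to get wrong in the unsafe direction: Kubernetes pod names carry their generated unique suffix at the end, so any trimming has to keep the rightmost characters. A sketch of the idea (the class name, the 23-character limit constant, and the sample pod names are illustrative, not code from any of the projects discussed):

```java
public class NodeIdentifier {
    // Assumed limit from the discussion above: the transaction
    // node-identifier may be at most 23 characters long.
    static final int MAX_LENGTH = 23;

    // Keep the LAST 23 characters: the generated pod-name suffix
    // (e.g. "-7d9f8b6c45-xk2lp") sits at the end, so trimming from
    // the left preserves the distinctive part of the name.
    static String nodeIdentifier(String podName) {
        return podName.length() <= MAX_LENGTH
                ? podName
                : podName.substring(podName.length() - MAX_LENGTH);
    }

    public static void main(String[] args) {
        System.out.println(nodeIdentifier("infinispan-app-1-x8b2x"));
        System.out.println(nodeIdentifier("verylongdeploymentname-7d9f8b6c45-xk2lp"));
    }
}
```

As noted above, even this is only safe if the retained suffix really is unique across everything sharing the transaction object store.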
>>>>>> > >>>>>> >>>>>> Granted, we set it by default in CCT Modules. However in Infinispan >>>>>> we just >>>>>> grab provided transaction subsystem when rendering full configuration >>>>>> from >>>>>> featurepacks: >>>>>> >>>>>> https://github.com/infinispan/infinispan/blob/master/server/integration/feature-pack/src/main/resources/configuration/standalone/subsystems-cloud.xml#L19 >>>>>> >>>>>> The default configuration XML doesn't contain the `node-identifier` >>>>>> attribute. I can add it manually in the cloud.xml but I believe the >>>>>> right >>>>>> approach is to modify the transaction subsystem. >>>>>> >>>>>> >>>>>> > Rado >>>>>> > >>>>>> > > Thanks, >>>>>> > > Seb >>>>>> > > >>>>>> > > [1] usually set to node-identifier="${jboss.node.name}" >>>>>> > > >>>>>> > > >>>>>> >>>>> > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero >>>>> infinispan.org> >>>>>> > > wrote: >>>>>> > >> >>>>>> > >> On 9 April 2018 at 09:26, Sebastian Laskawiec >>>>> redhat.com> >>>>> >>>>> >>>>>> > wrote: >>>>>> > >> > Thanks for looking into it Sanne. Of course, we should add it >>>>>> (it can >>>>>> > be >>>>>> > >> > set >>>>>> > >> > to the same name as hostname since those are unique in >>>>>> Kubernetes). >>>>>> > >> > >>>>>> > >> > Created https://issues.jboss.org/browse/ISPN-9051 for it. >>>>>> > >> > >>>>>> > >> > Thanks again! >>>>>> > >> > Seb >>>>>> > >> >>>>>> > >> Thanks Sebastian! >>>>>> > >> >>>>>> > >> > >>>>>> >>>>> > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero >>>>> infinispan.org> >>>>> >>>>> >>>>>> > >> > wrote: >>>>>> > >> >> >>>>>> > >> >> Hi all, >>>>>> > >> >> >>>>>> > >> >> I've started to use the Infinispan Openshift Template and was >>>>>> > browsing >>>>>> > >> >> through the errors and warnings this produces. >>>>>> > >> >> >>>>>> > >> >> In particular I noticed "WFLYTX0013: Node identifier property >>>>>> is set >>>>>> > >> >> to the default value. Please make sure it is unique." being >>>>>> produced >>>>>> > >> >> by the transaction system. 
>>>>>> > >> >> >>>>>> > >> >> The node id is usually not needed for developer's convenience >>>>>> and >>>>>> > >> >> assuming there's a single node in "dev mode", yet clearly the >>>>>> > >> >> Infinispan template is meant to work with multiple nodes >>>>>> running so >>>>>> > >> >> this warning seems concerning. >>>>>> > >> >> >>>>>> > >> >> I'm not sure what the impact is on the transaction manager so >>>>>> I asked >>>>>> > >> >> on the Narayana forums; Tom pointed me to some thourough >>>>>> design >>>>>> > >> >> documents and also suggested the EAP image does set the node >>>>>> > >> >> identifier: >>>>>> > >> >> - https://developer.jboss.org/message/981702#981702 >>>>>> > >> >> >>>>>> > >> >> WDYT? we probably want the Infinispan template to set this as >>>>>> well, >>>>>> > or >>>>>> > >> >> silence the warning? >>>>>> > >> >> >>>>>> > >> >> Thanks, >>>>>> > >> >> Sanne >>>>>> > >> >> _______________________________________________ >>>>>> > >> >> infinispan-dev mailing list >>>>>> >>>>> > >> >> infinispan-dev at lists.jboss.org >>>>> >>>>> >>>>>> > >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> > >> > >>>>>> > >> > >>>>>> > >> > _______________________________________________ >>>>>> > >> > infinispan-dev mailing list >>>>>> >>>>> > >> > infinispan-dev at lists.jboss.org >>>>> >>>>> >>>>>> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> > >> _______________________________________________ >>>>>> > >> infinispan-dev mailing list >>>>>> > >> infinispan-dev at lists.jboss.org >>>>>> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> > >>>>>> -------------- next part -------------- >>>>>> An HTML attachment was scrubbed... >>>>>> URL: >>>>>> http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180416/65962cf1/attachment-0001.html >>>>>> >>>>>> >>>>>> >>>>>> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180506/3da806f2/attachment-0001.html From dan.berindei at gmail.com Mon May 7 02:56:36 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 7 May 2018 09:56:36 +0300 Subject: [infinispan-dev] (no subject) In-Reply-To: References: Message-ID: Yes, JGroupsTransport wasn't failing x-site requests when the remote site was unreachable: ISPN-9113 [1] When I first started the migration away from MessageDispatcher I was hoping to use the bridge view from RELAY2 to detect unreachable sites, but then I realized only the site master sees the bridge view, so I went back to using the SITE_UNREACHABLE event... or so I thought. Cheers Dan [1]: https://issues.jboss.org/browse/ISPN-9113 On Tue, May 1, 2018 at 4:46 AM, Sebastian Laskawiec wrote: > Hey Galder, > > I haven't sent any email since I didn't have enough time to create a > proper reproducer or investigate what was going on. > > During the summit work, I switched from a custom build of 9.2.1.Final to > the latest master. This resulted in all sites going up and down. I was > struggling for 5 hours and I couldn't stabilize it. Then, 30 mins before > rehearsal session I decided to revert back to 9.2.1.Final. > > I wish I had more clues. Maybe I haven't done proper migration or used too > short timeouts for some FD* protocol. It's hard to say. > > Thanks, > Sebastian > > On Mon, Apr 30, 2018 at 5:16 PM Galder Zamarreno > wrote: > >> Ups, sent too early! So, the NYC site is not up, so I see in the logs: >> >> 2018-04-30 16:53:49,411 ERROR [org.infinispan.test.fwk.TEST_RELAY2] >> (testng-ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]:[]) >> ProtobufMetadataXSiteStateTransferTest[DIST_SYNC, tx=false]-NodeA-55452: >> no route to NYC: dropping message >> >> But the put hangs and never completes [2]. I've traced the code and [3] >> never gets called, with no events. 
>> >> I think this might be a JGroups bug because ChannelCallbacks >> implements UpHandler, but JChannel never deals with a receiver that might >> implement UpHandler, so it never delivers site unreachable message up the >> stack. >> >> @Bela? >> >> Cheers, >> Galder >> >> [2] https://gist.github.com/galderz/ada0e9317889eaa272845430b8d36ba1 >> [3] https://github.com/infinispan/infinispan/blob/ >> master/core/src/main/java/org/infinispan/remoting/transport/ >> jgroups/JGroupsTransport.java#L1366 >> [4] https://github.com/belaban/JGroups/blob/master/ >> src/org/jgroups/JChannel.java#L953-L983 >> >> >> >> On Mon, Apr 30, 2018 at 5:09 PM Galder Zamarreno >> wrote: >> >>> Hi Sebastian, >>> >>> Did you mention something about x-site not working on master? >>> >>> The reason I ask is cos I was trying to create a state transfer test for >>> [1] and there are some odds happening. >>> >>> In my test, I start LON site configured with NYC but NYC is not up yet. >>> >>> [1] https://issues.jboss.org/browse/ISPN-9111 >>> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180507/cf3f4566/attachment.html From brian.stansberry at redhat.com Mon May 7 17:08:14 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 7 May 2018 16:08:14 -0500 Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: If it's not already set, WildFly sets the system property jboss.node.name at the very beginning of server boot, so ${jboss.node.name*:1*} would not resolve to 1. 
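Concretely, the change being discussed amounts to a single attribute on the transactions subsystem's core-environment element. A sketch of the resulting profile XML (the namespace version and surrounding elements are illustrative, modeled on a typical WildFly standalone profile rather than copied from the actual template under discussion):

```xml
<subsystem xmlns="urn:jboss:domain:transactions:4.0">
    <!-- node-identifier resolves to jboss.node.name, which WildFly
         sets at the very beginning of boot (on OpenShift, typically
         the pod name), so the ":1" fallback should never apply -->
    <core-environment node-identifier="${jboss.node.name:1}">
        <process-id>
            <uuid/>
        </process-id>
    </core-environment>
    <recovery-environment socket-binding="txn-recovery-environment"
                          status-socket-binding="txn-status-manager"/>
</subsystem>
```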
On Sun, May 6, 2018 at 6:10 PM, Sebastian Laskawiec wrote: > Ok, so how about doing the same thing you suggested, but just more > explicitly - adding node-identifier="${jboss.node.name*:1*}". This way > the bare metal deployment should be happy (since the default is still 1) > and we wouldn't need to override it in Infinispan. > > On Tue, May 1, 2018 at 10:09 AM Tom Jenkinson > wrote: > >> I am not sure - the default should be "1" for the bare metal case so the >> warning is reliably triggered but the default can be the pod name for >> OpenShift templates that only allow a single instance of the application >> server - does that help? >> >> The file you looked to want to edit is shared by bare metal and other >> deployment environments so it would be confusing to set the default to >> jboss.node.name there IMO. >> >> On 1 May 2018 at 03:39, Sebastian Laskawiec wrote: >> >>> Fair enough Tom. Thanks for explanation. >>> >>> One more request - would you guys be OK with me adding >>> a node-identifier="${jboss.node.name}" to the transaction subsystem >>> template [1]? This way we wouldn't need to copy it into Infinispan (since >>> we need to set it). >>> >>> [1] https://github.com/wildfly/wildfly/blob/master/ >>> transactions/src/main/resources/subsystem-templates/transactions.xml#L6 >>> >>> On Wed, Apr 18, 2018 at 3:37 PM Tom Jenkinson >>> wrote: >>> >>>> On 18 April 2018 at 14:07, Sebastian Laskawiec >>>> wrote: >>>> >>>>> Hey Tom, >>>>> >>>>> Comments inlined. >>>>> >>>>> Thanks, >>>>> Sebastian >>>>> >>>>> On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson < >>>>> tom.jenkinson at redhat.com> wrote: >>>>> >>>>>> >>>>>> >>>>>> On 16 April 2018 at 09:31, <> wrote: >>>>>> >>>>>>> Adding +WildFly Dev to the loop >>>>>> >>>>>> >>>>>>> >>>>>>> Thanks for the explanation Rado. >>>>>>> >>>>>>> TL;DR: A while ago Sanne pointed out that we do not set >>>>>>> `node-identifier` >>>>>>> in transaction subsystem by default. 
The default value for the >>>>>>> `node-identifier` attribute it `1`. Not setting this attribute might >>>>>>> cause >>>>>>> problems in transaction recovery. Perhaps we could follow Rado's >>>>>>> idea and >>>>>>> set it to node name by default? >>>>>>> >>>>>> Indeed - it would cause serious data integrity problems if a >>>>>> non-unique node-identifier is used. >>>>>> >>>>>>> Some more comments inlined. >>>>>>> >>>>>>> Thanks, >>>>>>> Sebastian >>>>>>> >>>>>>> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar >>>>>>> wrote: >>>>>>> >>>>>>> > Hi Sebastian, >>>>>>> > >>>>>>> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec >>>>>>> > wrote: >>>>>>> > > Hey Rado, Paul, >>>>>>> > > >>>>>>> > > I started looking into this issue and it turned out that WF >>>>>>> subsystem >>>>>>> > > template doesn't provide `node-identifier` attribute [1]. >>>>>>> > >>>>>>> > I assume you mean that the default WildFly server profiles do not >>>>>>> >>>>>> > explicitly define the attribute. Right ? thus the value defaults in >>>>>> >>>>>> >>>>>>> > the model to "1" >>>>>>> > >>>>>>> > https://github.com/wildfly/wildfly/blob/master/ >>>>>>> transactions/src/main/java/org/jboss/as/txn/subsystem/ >>>>>>> TransactionSubsystemRootResourceDefinition.java#L97 >>>>>>> > which sole intention seems to be to log a warning on boot if the >>>>>>> value >>>>>>> > is unchanged. >>>>>>> > Why they decided on a constant that will be inherently not unique >>>>>>> as >>>>>>> > opposed to defaulting to the node name (which we already require >>>>>>> to be >>>>>>> > unique) as clustering node name or undertow instance-id does, is >>>>>>> > unclear to me. >>>>>>> > Some context is on https://issues.jboss.org/browse/WFLY-1119. >>>>>>> > >>>>>>> >>>>>>> In OpenShift environment we could set it to `hostname`. This is >>>>>>> guaranteed >>>>>>> to be unique in whole OpenShift cluster. >>>>>>> >>>>>>> >>>>>>> We do this too in EAP images. 
>>>>>> To Rado's point, the default is "1" so we can print the warning to >>>>>> alert people they are misconfigured - it seems to be working :) >>>>>> >>>>> >>>>> This is the point. From my understanding, if we set it to node name >>>>> (instead of "1"), we could make it always work correctly. We could even >>>>> remove the code that emits the warning (since the node name needs to be >>>>> unique). >>>>> >>>>> To sum it up - if we decided to proceed this way, there would be no >>>>> requirement of setting the node-identifier at all. >>>>> >>>> >>>> For OpenShift you are right there is no requirement for someone to >>>> change the node-identifier from the podname and so that is why EAP images >>>> do that. >>>> >>>> For bare-metal it is different as there can be two servers on the same >>>> machine so they were configured to use the hostname as they node-identifier >>>> then if they were also connected to the same resource managers or the same >>>> object store they would interfere with each other. >>>> >>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>>> > >>>>>> >>>>>> >>>>>>> > > I'm not sure if you guys are the right people to ask, but is it >>>>>>> safe to >>>>>>> > > leave it set to default? Or shall I override our Infinispan >>>>>>> templates and >>>>>>> > > add this parameter (as I mentioned before, in OpenShift this I >>>>>>> wanted to >>>>>>> > set >>>>>>> > > it as Pod name trimmed to the last 23 chars since this is the >>>>>>> limit). >>>>>>> >>>>>> Putting a response to this in line - I am not certain who originally >>>>>> proposed this. >>>>>> >>>>>> You must use a globally unique node-identifier. If you are certain >>>>>> the last 23 characters guarantee that it would be valid - if there is a >>>>>> chance they are not unique it is not valid to trim. >>>>>> >>>>> >>>>> If that's not an issue, again, we could use the same limit as we have >>>>> for node name. 
>>>>> >>>>> >>>>>> >>>>>> >>>>>> >>>>>>> > >>>>>> >>>>>> >>>>>>> > It is not safe to leave it set to "1" as that results in >>>>>>> inconsistent >>>>>>> > processing of transaction recovery. >>>>>>> > IIUC we already set it to the node name for both EAP and JDG >>>>>>> > >>>>>>> > https://github.com/jboss-openshift/cct_module/blob/ >>>>>>> master/os-eap70-openshift/added/standalone-openshift.xml#L411 >>>>>>> > >>>>>>> > https://github.com/jboss-openshift/cct_module/blob/ >>>>>>> master/os-jdg7-conffiles/added/clustered-openshift.xml#L282 >>>>>>> >>>>>> > which in turn defaults to the pod name ? so which profiles are we >>>>>> >>>>>> >>>>>>> > talking about here? >>>>>>> > >>>>>>> >>>>>>> Granted, we set it by default in CCT Modules. However in Infinispan >>>>>>> we just >>>>>>> grab provided transaction subsystem when rendering full >>>>>>> configuration from >>>>>>> featurepacks: >>>>>>> https://github.com/infinispan/infinispan/blob/master/server/ >>>>>>> integration/feature-pack/src/main/resources/configuration/ >>>>>>> standalone/subsystems-cloud.xml#L19 >>>>>>> >>>>>>> The default configuration XML doesn't contain the `node-identifier` >>>>>>> attribute. I can add it manually in the cloud.xml but I believe the >>>>>>> right >>>>>>> approach is to modify the transaction subsystem. >>>>>>> >>>>>>> >>>>>>> > Rado >>>>>>> > >>>>>>> > > Thanks, >>>>>>> > > Seb >>>>>>> > > >>>>>>> > > [1] usually set to node-identifier="${jboss.node.name}" >>>>>>> > > >>>>>>> > > >>>>>>> >>>>>> > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero >>>>>> infinispan.org> >>>>>>> > > wrote: >>>>>>> > >> >>>>>>> > >> On 9 April 2018 at 09:26, Sebastian Laskawiec >>>>>> redhat.com> >>>>>> >>>>>> >>>>>>> > wrote: >>>>>>> > >> > Thanks for looking into it Sanne. Of course, we should add it >>>>>>> (it can >>>>>>> > be >>>>>>> > >> > set >>>>>>> > >> > to the same name as hostname since those are unique in >>>>>>> Kubernetes). 
>>>>>>> > >> > >>>>>>> > >> > Created https://issues.jboss.org/browse/ISPN-9051 for it. >>>>>>> > >> > >>>>>>> > >> > Thanks again! >>>>>>> > >> > Seb >>>>>>> > >> >>>>>>> > >> Thanks Sebastian! >>>>>>> > >> >>>>>>> > >> > >>>>>>> >>>>>> > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero >>>>>> infinispan.org> >>>>>> >>>>>> >>>>>>> > >> > wrote: >>>>>>> > >> >> >>>>>>> > >> >> Hi all, >>>>>>> > >> >> >>>>>>> > >> >> I've started to use the Infinispan Openshift Template and was >>>>>>> > browsing >>>>>>> > >> >> through the errors and warnings this produces. >>>>>>> > >> >> >>>>>>> > >> >> In particular I noticed "WFLYTX0013: Node identifier >>>>>>> property is set >>>>>>> > >> >> to the default value. Please make sure it is unique." being >>>>>>> produced >>>>>>> > >> >> by the transaction system. >>>>>>> > >> >> >>>>>>> > >> >> The node id is usually not needed for developer's >>>>>>> convenience and >>>>>>> > >> >> assuming there's a single node in "dev mode", yet clearly the >>>>>>> > >> >> Infinispan template is meant to work with multiple nodes >>>>>>> running so >>>>>>> > >> >> this warning seems concerning. >>>>>>> > >> >> >>>>>>> > >> >> I'm not sure what the impact is on the transaction manager >>>>>>> so I asked >>>>>>> > >> >> on the Narayana forums; Tom pointed me to some thorough >>>>>>> design >>>>>>> > >> >> documents and also suggested the EAP image does set the node >>>>>>> > >> >> identifier: >>>>>>> > >> >> - https://developer.jboss.org/message/981702#981702 >>>>>>> > >> >> >>>>>>> > >> >> WDYT? we probably want the Infinispan template to set this >>>>>>> as well, >>>>>>> > or >>>>>>> > >> >> silence the warning? 
>>>>>>> > >> >> >>>>>>> > >> >> Thanks, >>>>>>> > >> >> Sanne >>>>>>> > >> >> _______________________________________________ >>>>>>> > >> >> infinispan-dev mailing list >>>>>>> >>>>>> > >> >> infinispan-dev at lists.jboss.org >>>>>> >>>>>> >>>>>>> > >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> > >> > >>>>>>> > >> > >>>>>>> > >> > _______________________________________________ >>>>>>> > >> > infinispan-dev mailing list >>>>>>> >>>>>> > >> > infinispan-dev at lists.jboss.org >>>>>> >>>>>> >>>>>>> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> > >> _______________________________________________ >>>>>>> > >> infinispan-dev mailing list >>>>>>> > >> infinispan-dev at lists.jboss.org >>>>>>> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> > >>>>>>> -------------- next part -------------- >>>>>>> An HTML attachment was scrubbed... >>>>>>> URL: http://lists.jboss.org/pipermail/wildfly-dev/ >>>>>>> attachments/20180416/65962cf1/attachment-0001.html >>>>>>> >>>>>>> >>>>>>> >>>>>>> >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180507/2a70ac8a/attachment-0001.html From galder at redhat.com Mon May 7 20:48:31 2018 From: galder at redhat.com (Galder Zamarreno) Date: Tue, 08 May 2018 00:48:31 +0000 Subject: [infinispan-dev] Kubernetes simple demo failing with OpenShift 3.7.2 and latest FMP In-Reply-To: References: <9b8e78af-4c4d-3224-92ae-f09a3b5ca18c@redhat.com> Message-ID: Integrated! 
On Sun, 6 May 2018 at 15:47, Sebastian Laskawiec wrote: > Have you tried my latest PR, which has been waiting for the review for > over 2 months :D > > https://github.com/infinispan/infinispan-simple-tutorials/pull/42 > > On Fri, May 4, 2018 at 4:35 PM Galder Zamarreno wrote: > >> Still same error after applying that to developer >> >> On Fri, May 4, 2018 at 10:18 AM Radim Vansa wrote: >> >>> I think you can override this limitation using >>> >>> |oc adm policy add-scc-to-user anyuid developer | >>> >>> though it'd not be a recommended setting for production use... >>> >>> R. >>> >>> >>> >>> On 05/04/2018 09:45 AM, Galder Zamarreno wrote: >>> > Hey Sebastian, >>> > >>> > I've been checking with Clement and this might be due to OpenShift not >>> > allowing that base image for source builds. >>> > >>> > It seems like that to do S2I you need a base image with a certain user >>> > (I think that's 1001) and neither the java/ nor the fabric8/ ones do >>> > that. Clement mentioned redhat-openjdk-18/openjdk18-openshift images >>> > might do that but those I think are behind VPN or require some form of >>> > login. Clement also mentioned this might work with minishift, but I've >>> > not tried yet. >>> > >>> > The alternative might be to switch that example to use binary builds >>> > and adjust instructions for OpenShift and plain Kubernetes. >>> > >>> > Cheers, >>> > Galder >>> > >>> > On Thu, May 3, 2018 at 6:49 PM Galder Zamarreno >> > > wrote: >>> > >>> > Hey Sebastian, >>> > >>> > I'm trying to update simple tutorials to Infinispan 9.2.2.Final >>> > but Kubernetes demo does not seem to be working. >>> > >>> > I've started OpenShift 3.7.2 and have updated FMP to 3.5.33 and >>> > build fails. 
Error is: >>> > >>> > > error: build error: image "java:8-jre-alpine" must specify a >>> > user that is numeric and within the range of allowed users >>> > >>> > Cheers, >>> > Galder >>> > >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180508/59534aca/attachment.html From rory.odonnell at oracle.com Tue May 8 03:57:32 2018 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Tue, 8 May 2018 08:57:32 +0100 Subject: [infinispan-dev] JDK 11 Early Access build 12 available Message-ID: <8c16f220-9b7b-7bb6-6503-e7280c589d42@oracle.com> Hi Galder, JDK 11 EA build 12, under both the GPL and Oracle EA licenses, is now available at http://jdk.java.net/11. * Newly approved Schedule, status & features o http://openjdk.java.net/projects/jdk/11/ * Release Notes: o http://jdk.java.net/11/release-notes * Summary of changes o https://download.java.net/java/early_access/jdk11/12/jdk-11+12.html Notable changes in JDK 11 EA builds since last email: * Build 11 - see Release Notes for details. 
o JDK-8201315 : SelectableChannel.register may be invoked while a selection operation is in progress * Build 10 - see Release Notes for details. o JDK-8200149 : Removal of "com.sun.awt.AWTUtilities" class o JDK-8189997 (not public) : Enhanced KeyStore Mechanisms o JDK-8175075 (not public) : 3DES Cipher Suites Disabled * Build 9 - see Release Notes for details. o JDK-8200152 : KerberosString uses UTF-8 encoding by default o JDK-8200458 : Readiness information previously recorded in SelectionKey ready set not preserved Draft JEP: Deprecate pack200, unpack200 tools and related APIs. [1] This draft JEP [2] proposes to deprecate the pack200 APIs and tools in the JDK. As outlined in the JEP, the usefulness of this technology has diminishing returns, the components using them are being removed and connectivity speeds have improved by leaps and bounds since its inception. Feedback appreciated via http://mail.openjdk.java.net/pipermail/jdk-dev Regards, Rory [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-April/001074.html [2] https://bugs.openjdk.java.net/browse/JDK-8200752 Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180508/bca6df36/attachment.html From brian.stansberry at redhat.com Tue May 8 13:45:37 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 8 May 2018 12:45:37 -0500 Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: I might have missed something along the way, but if they are going to do scripting wouldn't they just set the attribute to ${jboss.node.name} and count on the fact that this is unique per pod? On Tue, May 8, 2018 at 3:28 AM, Tom Jenkinson wrote: > Thanks for confirming Brian. 
> > Perhaps we could set it to: > node-identifier="${jboss.tx.node.id:1}" > (a bit like https://github.com/jboss-developer/jboss-eap- > quickstarts/tree/7.1/jts) > > Sebastian could set -Djboss.tx.node.id during startup in a script? > > > > On 7 May 2018 at 22:08, Brian Stansberry > wrote: > >> If it's not already set, WildFly sets the system property jboss.node.name >> at the very beginning of server boot, so ${jboss.node.name*:1*} would >> not resolve to 1. >> >> On Sun, May 6, 2018 at 6:10 PM, Sebastian Laskawiec >> wrote: >> >>> Ok, so how about doing the same thing you suggested, but just more >>> explicitly - adding node-identifier="${jboss.node.name*:1*}". This way >>> the bare metal deployment should be happy (since the default is still 1) >>> and we wouldn't need to override it in Infinispan. >>> >>> On Tue, May 1, 2018 at 10:09 AM Tom Jenkinson >>> wrote: >>> >>>> I am not sure - the default should be "1" for the bare metal case so >>>> the warning is reliably triggered but the default can be the pod name for >>>> OpenShift templates that only allow a single instance of the application >>>> server - does that help? >>>> >>>> The file you looked to want to edit is shared by bare metal and other >>>> deployment environments so it would be confusing to set the default to >>>> jboss.node.name there IMO. >>>> >>>> On 1 May 2018 at 03:39, Sebastian Laskawiec >>>> wrote: >>>> >>>>> Fair enough Tom. Thanks for explanation. >>>>> >>>>> One more request - would you guys be OK with me adding >>>>> a node-identifier="${jboss.node.name}" to the transaction subsystem >>>>> template [1]? This way we wouldn't need to copy it into Infinispan (since >>>>> we need to set it). 
>>>>> >>>>> [1] https://github.com/wildfly/wildfly/blob/master/transacti >>>>> ons/src/main/resources/subsystem-templates/transactions.xml#L6 >>>>> >>>>> On Wed, Apr 18, 2018 at 3:37 PM Tom Jenkinson < >>>>> tom.jenkinson at redhat.com> wrote: >>>>> >>>>>> On 18 April 2018 at 14:07, Sebastian Laskawiec >>>>>> wrote: >>>>>> >>>>>>> Hey Tom, >>>>>>> >>>>>>> Comments inlined. >>>>>>> >>>>>>> Thanks, >>>>>>> Sebastian >>>>>>> >>>>>>> On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson < >>>>>>> tom.jenkinson at redhat.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On 16 April 2018 at 09:31, <> wrote: >>>>>>>> >>>>>>>>> Adding +WildFly Dev to the loop >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks for the explanation Rado. >>>>>>>>> >>>>>>>>> TL;DR: A while ago Sanne pointed out that we do not set >>>>>>>>> `node-identifier` >>>>>>>>> in transaction subsystem by default. The default value for the >>>>>>>>> `node-identifier` attribute it `1`. Not setting this attribute >>>>>>>>> might cause >>>>>>>>> problems in transaction recovery. Perhaps we could follow Rado's >>>>>>>>> idea and >>>>>>>>> set it to node name by default? >>>>>>>>> >>>>>>>> Indeed - it would cause serious data integrity problems if a >>>>>>>> non-unique node-identifier is used. >>>>>>>> >>>>>>>>> Some more comments inlined. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Sebastian >>>>>>>>> >>>>>>>>> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar >>>>>>>> redhat.com> wrote: >>>>>>>>> >>>>>>>>> > Hi Sebastian, >>>>>>>>> > >>>>>>>>> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec >>>>>>>>> > wrote: >>>>>>>>> > > Hey Rado, Paul, >>>>>>>>> > > >>>>>>>>> > > I started looking into this issue and it turned out that WF >>>>>>>>> subsystem >>>>>>>>> > > template doesn't provide `node-identifier` attribute [1]. >>>>>>>>> > >>>>>>>>> > I assume you mean that the default WildFly server profiles do not >>>>>>>>> >>>>>>>> > explicitly define the attribute. Right ? 
thus the value defaults >>>>>>>>> in >>>>>>>> >>>>>>>> >>>>>>>>> > the model to "1" >>>>>>>>> > >>>>>>>>> > https://github.com/wildfly/wildfly/blob/master/transactions/ >>>>>>>>> src/main/java/org/jboss/as/txn/subsystem/TransactionSubsyste >>>>>>>>> mRootResourceDefinition.java#L97 >>>>>>>>> > whose sole intention seems to be to log a warning on boot if the >>>>>>>>> value >>>>>>>>> > is unchanged. >>>>>>>>> > Why they decided on a constant that will be inherently not >>>>>>>>> unique as >>>>>>>>> > opposed to defaulting to the node name (which we already >>>>>>>>> require to be >>>>>>>>> > unique) as clustering node name or undertow instance-id does, is >>>>>>>>> > unclear to me. >>>>>>>>> > Some context is on https://issues.jboss.org/browse/WFLY-1119. >>>>>>>>> > >>>>>>>>> >>>>>>>>> In OpenShift environment we could set it to `hostname`. This is >>>>>>>>> guaranteed >>>>>>>>> to be unique in whole OpenShift cluster. >>>>>>>>> >>>>>>>>> >>>>>>>>> We do this too in EAP images. >>>>>>>> To Rado's point, the default is "1" so we can print the warning to >>>>>>>> alert people they are misconfigured - it seems to be working :) >>>>>>>> >>>>>>> >>>>>>> This is the point. From my understanding, if we set it to node name >>>>>>> (instead of "1"), we could make it always work correctly. We could even >>>>>>> remove the code that emits the warning (since the node name needs to be >>>>>>> unique). >>>>>>> >>>>>>> To sum it up - if we decided to proceed this way, there would be no >>>>>>> requirement of setting the node-identifier at all. >>>>>>> >>>>>> >>>>>> For OpenShift you are right there is no requirement for someone to >>>>>> change the node-identifier from the podname and so that is why EAP images >>>>>> do that. 
>>>>>> >>>>>> For bare-metal it is different as there can be two servers on the >>>>>> same machine so they were configured to use the hostname as they >>>>>> node-identifier then if they were also connected to the same resource >>>>>> managers or the same object store they would interfere with each other. >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> > >>>>>>>> >>>>>>>> >>>>>>>>> > > I'm not sure if you guys are the right people to ask, but is >>>>>>>>> it safe to >>>>>>>>> > > leave it set to default? Or shall I override our Infinispan >>>>>>>>> templates and >>>>>>>>> > > add this parameter (as I mentioned before, in OpenShift this I >>>>>>>>> wanted to >>>>>>>>> > set >>>>>>>>> > > it as Pod name trimmed to the last 23 chars since this is the >>>>>>>>> limit). >>>>>>>>> >>>>>>>> Putting a response to this in line - I am not certain who >>>>>>>> originally proposed this. >>>>>>>> >>>>>>>> You must use a globally unique node-identifier. If you are certain >>>>>>>> the last 23 characters guarantee that it would be valid - if there is a >>>>>>>> chance they are not unique it is not valid to trim. >>>>>>>> >>>>>>> >>>>>>> If that's not an issue, again, we could use the same limit as we >>>>>>> have for node name. >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> > >>>>>>>> >>>>>>>> >>>>>>>>> > It is not safe to leave it set to "1" as that results in >>>>>>>>> inconsistent >>>>>>>>> > processing of transaction recovery. >>>>>>>>> > IIUC we already set it to the node name for both EAP and JDG >>>>>>>>> > >>>>>>>>> > https://github.com/jboss-openshift/cct_module/blob/master/os >>>>>>>>> -eap70-openshift/added/standalone-openshift.xml#L411 >>>>>>>>> > >>>>>>>>> > https://github.com/jboss-openshift/cct_module/blob/master/os >>>>>>>>> -jdg7-conffiles/added/clustered-openshift.xml#L282 >>>>>>>>> >>>>>>>> > which in turn defaults to the pod name ? so which profiles are we >>>>>>>> >>>>>>>> >>>>>>>>> > talking about here? 
>>>>>>>>> > >>>>>>>>> >>>>>>>>> Granted, we set it by default in CCT Modules. However in >>>>>>>>> Infinispan we just >>>>>>>>> grab provided transaction subsystem when rendering full >>>>>>>>> configuration from >>>>>>>>> featurepacks: >>>>>>>>> https://github.com/infinispan/infinispan/blob/master/server/ >>>>>>>>> integration/feature-pack/src/main/resources/configuration/st >>>>>>>>> andalone/subsystems-cloud.xml#L19 >>>>>>>>> >>>>>>>>> The default configuration XML doesn't contain the `node-identifier` >>>>>>>>> attribute. I can add it manually in the cloud.xml but I believe >>>>>>>>> the right >>>>>>>>> approach is to modify the transaction subsystem. >>>>>>>>> >>>>>>>>> >>>>>>>>> > Rado >>>>>>>>> > >>>>>>>>> > > Thanks, >>>>>>>>> > > Seb >>>>>>>>> > > >>>>>>>>> > > [1] usually set to node-identifier="${jboss.node.name}" >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> >>>>>>>> > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero >>>>>>>> infinispan.org> >>>>>>>>> > > wrote: >>>>>>>>> > >> >>>>>>>>> > >> On 9 April 2018 at 09:26, Sebastian Laskawiec >>>>>>>> redhat.com> >>>>>>>> >>>>>>>> >>>>>>>>> > wrote: >>>>>>>>> > >> > Thanks for looking into it Sanne. Of course, we should add >>>>>>>>> it (it can >>>>>>>>> > be >>>>>>>>> > >> > set >>>>>>>>> > >> > to the same name as hostname since those are unique in >>>>>>>>> Kubernetes). >>>>>>>>> > >> > >>>>>>>>> > >> > Created https://issues.jboss.org/browse/ISPN-9051 for it. >>>>>>>>> > >> > >>>>>>>>> > >> > Thanks again! >>>>>>>>> > >> > Seb >>>>>>>>> > >> >>>>>>>>> > >> Thanks Sebastian! >>>>>>>>> > >> >>>>>>>>> > >> > >>>>>>>>> >>>>>>>> > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero >>>>>>>> infinispan.org> >>>>>>>> >>>>>>>> >>>>>>>>> > >> > wrote: >>>>>>>>> > >> >> >>>>>>>>> > >> >> Hi all, >>>>>>>>> > >> >> >>>>>>>>> > >> >> I've started to use the Infinispan Openshift Template and >>>>>>>>> was >>>>>>>>> > browsing >>>>>>>>> > >> >> through the errors and warnings this produces. 
>>>>>>>>> > >> >> >>>>>>>>> > >> >> In particular I noticed "WFLYTX0013: Node identifier >>>>>>>>> property is set >>>>>>>>> > >> >> to the default value. Please make sure it is unique." >>>>>>>>> being produced >>>>>>>>> > >> >> by the transaction system. >>>>>>>>> > >> >> >>>>>>>>> > >> >> The node id is usually not needed for developer's >>>>>>>>> convenience and >>>>>>>>> > >> >> assuming there's a single node in "dev mode", yet clearly >>>>>>>>> the >>>>>>>>> > >> >> Infinispan template is meant to work with multiple nodes >>>>>>>>> running so >>>>>>>>> > >> >> this warning seems concerning. >>>>>>>>> > >> >> >>>>>>>>> > >> >> I'm not sure what the impact is on the transaction manager >>>>>>>>> so I asked >>>>>>>>> > >> >> on the Narayana forums; Tom pointed me to some thorough >>>>>>>>> design >>>>>>>>> > >> >> documents and also suggested the EAP image does set the >>>>>>>>> node >>>>>>>>> > >> >> identifier: >>>>>>>>> > >> >> - https://developer.jboss.org/message/981702#981702 >>>>>>>>> > >> >> >>>>>>>>> > >> >> WDYT? we probably want the Infinispan template to set this >>>>>>>>> as well, >>>>>>>>> > or >>>>>>>>> > >> >> silence the warning? 
>>>>>>>>> > >> >> >>>>>>>>> > >> >> Thanks, >>>>>>>>> > >> >> Sanne >>>>>>>>> > >> >> _______________________________________________ >>>>>>>>> > >> >> infinispan-dev mailing list >>>>>>>>> >>>>>>>> > >> >> infinispan-dev at lists.jboss.org >>>>>>>> >>>>>>>> >>>>>>>>> > >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> > >> > >>>>>>>>> > >> > >>>>>>>>> > >> > _______________________________________________ >>>>>>>>> > >> > infinispan-dev mailing list >>>>>>>>> >>>>>>>> > >> > infinispan-dev at lists.jboss.org >>>>>>>> >>>>>>>> >>>>>>>>> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> > >> _______________________________________________ >>>>>>>>> > >> infinispan-dev mailing list >>>>>>>>> > >> infinispan-dev at lists.jboss.org >>>>>>>>> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> > >>>>>>>>> -------------- next part -------------- >>>>>>>>> An HTML attachment was scrubbed... >>>>>>>>> URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/201 >>>>>>>>> 80416/65962cf1/attachment-0001.html >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> >> >> -- >> Brian Stansberry >> Manager, Senior Principal Software Engineer >> Red Hat >> > > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180508/fff97469/attachment-0001.html From brian.stansberry at redhat.com Tue May 8 18:05:31 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 8 May 2018 17:05:31 -0500 Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: Ah, ok. 
I was thinking of scripting in the broad sense the various stuff that goes into creating images. In any case, I don't see any downside to having node-identifier="${jboss.tx. node.id:1}" in the standard WF config files. On Tue, May 8, 2018 at 3:07 PM, Tom Jenkinson wrote: > I think they want to avoid changing the standalone.xml file and just want > to control it from their startup script. > > On 8 May 2018 at 18:45, Brian Stansberry > wrote: > >> I might have missed something along the way, but if they are going to do >> scripting wouldn't they just set the attribute to ${jboss.node.name} and >> count on the fact that this is unique per pod? >> >> On Tue, May 8, 2018 at 3:28 AM, Tom Jenkinson >> wrote: >> >>> Thanks for confirming Brian. >>> >>> Perhaps we could set it to: >>> node-identifier="${jboss.tx.node.id:1}" >>> (a bit like https://github.com/jboss-developer/jboss-eap-quickstart >>> s/tree/7.1/jts) >>> >>> Sebastian could set -Djboss.tx.node.id during startup in a script? >>> >>> >>> >>> On 7 May 2018 at 22:08, Brian Stansberry >>> wrote: >>> >>>> If it's not already set, WildFly sets the system property >>>> jboss.node.name at the very beginning of server boot, so ${ >>>> jboss.node.name*:1*} would not resolve to 1. >>>> >>>> On Sun, May 6, 2018 at 6:10 PM, Sebastian Laskawiec < >>>> slaskawi at redhat.com> wrote: >>>> >>>>> Ok, so how about doing the same thing you suggested, but just more >>>>> explicitly - adding node-identifier="${jboss.node.name*:1*}". This >>>>> way the bare metal deployment should be happy (since the default is still >>>>> 1) and we wouldn't need to override it in Infinispan. 
>>>>> >>>>> On Tue, May 1, 2018 at 10:09 AM Tom Jenkinson < >>>>> tom.jenkinson at redhat.com> wrote: >>>>> >>>>>> I am not sure - the default should be "1" for the bare metal case so >>>>>> the warning is reliably triggered but the default can be the pod name for >>>>>> OpenShift templates that only allow a single instance of the application >>>>>> server - does that help? >>>>>> >>>>>> The file you looked to want to edit is shared by bare metal and other >>>>>> deployment environments so it would be confusing to set the default to >>>>>> jboss.node.name there IMO. >>>>>> >>>>>> On 1 May 2018 at 03:39, Sebastian Laskawiec >>>>>> wrote: >>>>>> >>>>>>> Fair enough Tom. Thanks for explanation. >>>>>>> >>>>>>> One more request - would you guys be OK with me adding >>>>>>> a node-identifier="${jboss.node.name}" to the transaction subsystem >>>>>>> template [1]? This way we wouldn't need to copy it into Infinispan (since >>>>>>> we need to set it). >>>>>>> >>>>>>> [1] https://github.com/wildfly/wildfly/blob/master/transacti >>>>>>> ons/src/main/resources/subsystem-templates/transactions.xml#L6 >>>>>>> >>>>>>> On Wed, Apr 18, 2018 at 3:37 PM Tom Jenkinson < >>>>>>> tom.jenkinson at redhat.com> wrote: >>>>>>> >>>>>>>> On 18 April 2018 at 14:07, Sebastian Laskawiec >>>>>>> > wrote: >>>>>>>> >>>>>>>>> Hey Tom, >>>>>>>>> >>>>>>>>> Comments inlined. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Sebastian >>>>>>>>> >>>>>>>>> On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson < >>>>>>>>> tom.jenkinson at redhat.com> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 16 April 2018 at 09:31, <> wrote: >>>>>>>>>> >>>>>>>>>>> Adding +WildFly Dev to the loop >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks for the explanation Rado. >>>>>>>>>>> >>>>>>>>>>> TL;DR: A while ago Sanne pointed out that we do not set >>>>>>>>>>> `node-identifier` >>>>>>>>>>> in transaction subsystem by default. The default value for the >>>>>>>>>>> `node-identifier` attribute it `1`. 
Not setting this attribute >>>>>>>>>>> might cause >>>>>>>>>>> problems in transaction recovery. Perhaps we could follow Rado's >>>>>>>>>>> idea and >>>>>>>>>>> set it to node name by default? >>>>>>>>>>> >>>>>>>>>> Indeed - it would cause serious data integrity problems if a >>>>>>>>>> non-unique node-identifier is used. >>>>>>>>>> >>>>>>>>>>> Some more comments inlined. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Sebastian >>>>>>>>>>> >>>>>>>>>>> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar >>>>>>>>>> redhat.com> wrote: >>>>>>>>>>> >>>>>>>>>>> > Hi Sebastian, >>>>>>>>>>> > >>>>>>>>>>> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec >>>>>>>>>>> > wrote: >>>>>>>>>>> > > Hey Rado, Paul, >>>>>>>>>>> > > >>>>>>>>>>> > > I started looking into this issue and it turned out that WF >>>>>>>>>>> subsystem >>>>>>>>>>> > > template doesn't provide `node-identifier` attribute [1]. >>>>>>>>>>> > >>>>>>>>>>> > I assume you mean that the default WildFly server profiles do >>>>>>>>>>> not >>>>>>>>>>> >>>>>>>>>> > explicitly define the attribute. Right ? thus the value >>>>>>>>>>> defaults in >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > the model to "1" >>>>>>>>>>> > >>>>>>>>>>> > https://github.com/wildfly/wildfly/blob/master/transactions/ >>>>>>>>>>> src/main/java/org/jboss/as/txn/subsystem/TransactionSubsyste >>>>>>>>>>> mRootResourceDefinition.java#L97 >>>>>>>>>>> > which sole intention seems to be to log a warning on boot if >>>>>>>>>>> the value >>>>>>>>>>> > is unchanged. >>>>>>>>>>> > Why they decided on a constant that will be inherently not >>>>>>>>>>> unique as >>>>>>>>>>> > opposed to defaulting to the node name (which we already >>>>>>>>>>> require to be >>>>>>>>>>> > unique) as clustering node name or undertow instance-id does, >>>>>>>>>>> is >>>>>>>>>>> > unclear to me. >>>>>>>>>>> > Some context is on https://issues.jboss.org/browse/WFLY-1119. >>>>>>>>>>> > >>>>>>>>>>> >>>>>>>>>>> In OpenShift environment we could set it to `hostname`. 
This is >>>>>>>>>>> guaranteed >>>>>>>>>>> to be unique in whole OpenShift cluster. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> We do this too in EAP images. >>>>>>>>>> To Rado's point, the default is "1" so we can print the warning >>>>>>>>>> to alert people they are misconfigured - it seems to be working :) >>>>>>>>>> >>>>>>>>> >>>>>>>>> This is the point. From my understanding, if we set it to node >>>>>>>>> name (instead of "1"), we could make it always work correctly. We could >>>>>>>>> even remove the code that emits the warning (since the node name needs to >>>>>>>>> be unique). >>>>>>>>> >>>>>>>>> To sum it up - if we decided to proceed this way, there would be >>>>>>>>> no requirement of setting the node-identifier at all. >>>>>>>>> >>>>>>>> >>>>>>>> For OpenShift you are right there is no requirement for someone to >>>>>>>> change the node-identifier from the podname and so that is why EAP images >>>>>>>> do that. >>>>>>>> >>>>>>>> For bare-metal it is different as there can be two servers on the >>>>>>>> same machine so they were configured to use the hostname as they >>>>>>>> node-identifier then if they were also connected to the same resource >>>>>>>> managers or the same object store they would interfere with each other. >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > > I'm not sure if you guys are the right people to ask, but is >>>>>>>>>>> it safe to >>>>>>>>>>> > > leave it set to default? Or shall I override our Infinispan >>>>>>>>>>> templates and >>>>>>>>>>> > > add this parameter (as I mentioned before, in OpenShift this >>>>>>>>>>> I wanted to >>>>>>>>>>> > set >>>>>>>>>>> > > it as Pod name trimmed to the last 23 chars since this is >>>>>>>>>>> the limit). >>>>>>>>>>> >>>>>>>>>> Putting a response to this in line - I am not certain who >>>>>>>>>> originally proposed this. >>>>>>>>>> >>>>>>>>>> You must use a globally unique node-identifier. 
If you are >>>>>>>>>> certain the last 23 characters guarantee that it would be valid - if there >>>>>>>>>> is a chance they are not unique it is not valid to trim. >>>>>>>>>> >>>>>>>>> >>>>>>>>> If that's not an issue, again, we could use the same limit as we >>>>>>>>> have for node name. >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > It is not safe to leave it set to "1" as that results in >>>>>>>>>>> inconsistent >>>>>>>>>>> > processing of transaction recovery. >>>>>>>>>>> > IIUC we already set it to the node name for both EAP and JDG >>>>>>>>>>> > >>>>>>>>>>> > https://github.com/jboss-openshift/cct_module/blob/master/os >>>>>>>>>>> -eap70-openshift/added/standalone-openshift.xml#L411 >>>>>>>>>>> > >>>>>>>>>>> > https://github.com/jboss-openshift/cct_module/blob/master/os >>>>>>>>>>> -jdg7-conffiles/added/clustered-openshift.xml#L282 >>>>>>>>>>> >>>>>>>>>> > which in turn defaults to the pod name ? so which profiles are >>>>>>>>>>> we >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > talking about here? >>>>>>>>>>> > >>>>>>>>>>> >>>>>>>>>>> Granted, we set it by default in CCT Modules. However in >>>>>>>>>>> Infinispan we just >>>>>>>>>>> grab provided transaction subsystem when rendering full >>>>>>>>>>> configuration from >>>>>>>>>>> featurepacks: >>>>>>>>>>> https://github.com/infinispan/infinispan/blob/master/server/ >>>>>>>>>>> integration/feature-pack/src/main/resources/configuration/st >>>>>>>>>>> andalone/subsystems-cloud.xml#L19 >>>>>>>>>>> >>>>>>>>>>> The default configuration XML doesn't contain the >>>>>>>>>>> `node-identifier` >>>>>>>>>>> attribute. I can add it manually in the cloud.xml but I believe >>>>>>>>>>> the right >>>>>>>>>>> approach is to modify the transaction subsystem. 
>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> > Rado >>>>>>>>>>> > >>>>>>>>>>> > > Thanks, >>>>>>>>>>> > > Seb >>>>>>>>>>> > > >>>>>>>>>>> > > [1] usually set to node-identifier="${jboss.node.name}" >>>>>>>>>>> > > >>>>>>>>>>> > > >>>>>>>>>>> >>>>>>>>>> > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero >>>>>>>>>> infinispan.org> >>>>>>>>>>> > > wrote: >>>>>>>>>>> > >> >>>>>>>>>>> > >> On 9 April 2018 at 09:26, Sebastian Laskawiec >>>>>>>>>> redhat.com> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > wrote: >>>>>>>>>>> > >> > Thanks for looking into it Sanne. Of course, we should >>>>>>>>>>> add it (it can >>>>>>>>>>> > be >>>>>>>>>>> > >> > set >>>>>>>>>>> > >> > to the same name as hostname since those are unique in >>>>>>>>>>> Kubernetes). >>>>>>>>>>> > >> > >>>>>>>>>>> > >> > Created https://issues.jboss.org/browse/ISPN-9051 for it. >>>>>>>>>>> > >> > >>>>>>>>>>> > >> > Thanks again! >>>>>>>>>>> > >> > Seb >>>>>>>>>>> > >> >>>>>>>>>>> > >> Thanks Sebastian! >>>>>>>>>>> > >> >>>>>>>>>>> > >> > >>>>>>>>>>> >>>>>>>>>> > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero >>>>>>>>>> infinispan.org> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > >> > wrote: >>>>>>>>>>> > >> >> >>>>>>>>>>> > >> >> Hi all, >>>>>>>>>>> > >> >> >>>>>>>>>>> > >> >> I've started to use the Infinispan Openshift Template >>>>>>>>>>> and was >>>>>>>>>>> > browsing >>>>>>>>>>> > >> >> through the errors and warnings this produces. >>>>>>>>>>> > >> >> >>>>>>>>>>> > >> >> In particular I noticed "WFLYTX0013: Node identifier >>>>>>>>>>> property is set >>>>>>>>>>> > >> >> to the default value. Please make sure it is unique." >>>>>>>>>>> being produced >>>>>>>>>>> > >> >> by the transaction system. 
>>>>>>>>>>> > >> >> >>>>>>>>>>> > >> >> The node id is usually not needed for developer's >>>>>>>>>>> convenience and >>>>>>>>>>> > >> >> assuming there's a single node in "dev mode", yet >>>>>>>>>>> clearly the >>>>>>>>>>> > >> >> Infinispan template is meant to work with multiple nodes >>>>>>>>>>> running so >>>>>>>>>>> > >> >> this warning seems concerning. >>>>>>>>>>> > >> >> >>>>>>>>>>> > >> >> I'm not sure what the impact is on the transaction >>>>>>>>>>> manager so I asked >>>>>>>>>>> > >> >> on the Narayana forums; Tom pointed me to some thourough >>>>>>>>>>> design >>>>>>>>>>> > >> >> documents and also suggested the EAP image does set the >>>>>>>>>>> node >>>>>>>>>>> > >> >> identifier: >>>>>>>>>>> > >> >> - https://developer.jboss.org/message/981702#981702 >>>>>>>>>>> > >> >> >>>>>>>>>>> > >> >> WDYT? we probably want the Infinispan template to set >>>>>>>>>>> this as well, >>>>>>>>>>> > or >>>>>>>>>>> > >> >> silence the warning? >>>>>>>>>>> > >> >> >>>>>>>>>>> > >> >> Thanks, >>>>>>>>>>> > >> >> Sanne >>>>>>>>>>> > >> >> _______________________________________________ >>>>>>>>>>> > >> >> infinispan-dev mailing list >>>>>>>>>>> >>>>>>>>>> > >> >> infinispan-dev at lists.jboss.org >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>> > >> > >>>>>>>>>>> > >> > >>>>>>>>>>> > >> > _______________________________________________ >>>>>>>>>>> > >> > infinispan-dev mailing list >>>>>>>>>>> >>>>>>>>>> > >> > infinispan-dev at lists.jboss.org >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>> > >> _______________________________________________ >>>>>>>>>>> > >> infinispan-dev mailing list >>>>>>>>>>> > >> infinispan-dev at lists.jboss.org >>>>>>>>>>> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>> > >>>>>>>>>>> -------------- next part -------------- >>>>>>>>>>> An HTML attachment was scrubbed... 
>>>>>>>>>>> URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180416/65962cf1/attachment-0001.html >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>> >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> >>>> >>>> >>>> -- >>>> Brian Stansberry >>>> Manager, Senior Principal Software Engineer >>>> Red Hat >>>> >>> >>> >> >> >> -- >> Brian Stansberry >> Manager, Senior Principal Software Engineer >> Red Hat >> > > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180508/75fa694e/attachment-0001.html From slaskawi at redhat.com Wed May 9 21:56:41 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 10 May 2018 01:56:41 +0000 Subject: [infinispan-dev] [wildfly-dev] WFLYTX0013 in the Infinispan Openshift Template In-Reply-To: References: Message-ID: I'm sorry for the delay... I got sucked into the Summit prep activities. Yes to all of what you said! Shall I create a JIRA for you? On Wed, May 9, 2018 at 9:39 AM Tom Jenkinson wrote: > Thanks Brian. Does it work for you Sebastian? > > On 8 May 2018 at 23:05, Brian Stansberry > wrote: > >> Ah, ok. I was thinking of scripting in the broad sense, i.e. the various stuff >> that goes into creating images. >> >> In any case, I don't see any downside to having node-identifier="${jboss.tx.node.id:1}" >> in the standard WF config files. >> >> >> >> On Tue, May 8, 2018 at 3:07 PM, Tom Jenkinson >> wrote: >> >>> I think they want to avoid changing the standalone.xml file and just >>> want to control it from their startup script.
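For reference, the change being discussed would look roughly like this in a standalone profile. This is a sketch only: the transactions subsystem namespace version varies between WildFly releases, and `jboss.tx.node.id` is simply the property name proposed in this thread, not an existing WildFly default.

```xml
<!-- Sketch of the proposed default; the namespace version is illustrative. -->
<subsystem xmlns="urn:jboss:domain:transactions:4.0">
    <!-- Resolves to "1" (keeping the WFLYTX0013 warning on bare metal)
         unless -Djboss.tx.node.id=... is passed at startup. -->
    <core-environment node-identifier="${jboss.tx.node.id:1}">
        <process-id>
            <uuid/>
        </process-id>
    </core-environment>
    <recovery-environment socket-binding="txn-recovery-environment"
                          status-socket-binding="txn-status-manager"/>
</subsystem>
```

A startup script could then pass, e.g., -Djboss.tx.node.id=$HOSTNAME so that each pod gets a unique identifier while bare-metal installs keep the warning.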
>>> On 8 May 2018 at 18:45, Brian Stansberry >>> wrote: >>> >>>> I might have missed something along the way, but if they are going to >>>> do scripting wouldn't they just set the attribute to ${jboss.node.name} >>>> and count on the fact that this is unique per pod? >>>> >>>> On Tue, May 8, 2018 at 3:28 AM, Tom Jenkinson >>> > wrote: >>>> >>>>> Thanks for confirming Brian. >>>>> >>>>> Perhaps we could set it to: >>>>> node-identifier="${jboss.tx.node.id:1}" >>>>> (a bit like >>>>> https://github.com/jboss-developer/jboss-eap-quickstarts/tree/7.1/jts) >>>>> >>>>> Sebastian could set -Djboss.tx.node.id during startup in a script? >>>>> >>>>> >>>>> >>>>> On 7 May 2018 at 22:08, Brian Stansberry >>>>> wrote: >>>>> >>>>>> If it's not already set, WildFly sets the system property >>>>>> jboss.node.name at the very beginning of server boot, so >>>>>> ${jboss.node.name:1} would not resolve to 1. >>>>>> >>>>>> On Sun, May 6, 2018 at 6:10 PM, Sebastian Laskawiec < >>>>>> slaskawi at redhat.com> wrote: >>>>>> >>>>>>> Ok, so how about doing the same thing you suggested, but just more >>>>>>> explicitly - adding node-identifier="${jboss.node.name:1}". This >>>>>>> way the bare metal deployment should be happy (since the default is still >>>>>>> 1) and we wouldn't need to override it in Infinispan. >>>>>>> >>>>>>> On Tue, May 1, 2018 at 10:09 AM Tom Jenkinson < >>>>>>> tom.jenkinson at redhat.com> wrote: >>>>>>> >>>>>>>> I am not sure - the default should be "1" for the bare metal case >>>>>>>> so the warning is reliably triggered, but the default can be the pod name >>>>>>>> for OpenShift templates that only allow a single instance of the >>>>>>>> application server - does that help? >>>>>>>> >>>>>>>> The file you appear to want to edit is shared by bare metal and >>>>>>>> other deployment environments so it would be confusing to set the default >>>>>>>> to jboss.node.name there IMO.
>>>>>>>> >>>>>>>> On 1 May 2018 at 03:39, Sebastian Laskawiec >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Fair enough Tom. Thanks for the explanation. >>>>>>>>> >>>>>>>>> One more request - would you guys be OK with me adding >>>>>>>>> a node-identifier="${jboss.node.name}" to the transaction >>>>>>>>> subsystem template [1]? This way we wouldn't need to copy it into >>>>>>>>> Infinispan (since we need to set it). >>>>>>>>> >>>>>>>>> [1] >>>>>>>>> https://github.com/wildfly/wildfly/blob/master/transactions/src/main/resources/subsystem-templates/transactions.xml#L6 >>>>>>>>> >>>>>>>>> On Wed, Apr 18, 2018 at 3:37 PM Tom Jenkinson < >>>>>>>>> tom.jenkinson at redhat.com> wrote: >>>>>>>>> >>>>>>>>>> On 18 April 2018 at 14:07, Sebastian Laskawiec < >>>>>>>>>> slaskawi at redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hey Tom, >>>>>>>>>>> >>>>>>>>>>> Comments inlined. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Sebastian >>>>>>>>>>> >>>>>>>>>>> On Tue, Apr 17, 2018 at 4:37 PM Tom Jenkinson < >>>>>>>>>>> tom.jenkinson at redhat.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On 16 April 2018 at 09:31, <> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Adding +WildFly Dev to the >>>>>>>>>>>>> loop >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks for the explanation Rado. >>>>>>>>>>>>> >>>>>>>>>>>>> TL;DR: A while ago Sanne pointed out that we do not set >>>>>>>>>>>>> `node-identifier` >>>>>>>>>>>>> in the transaction subsystem by default. The default value for the >>>>>>>>>>>>> `node-identifier` attribute is `1`. Not setting this attribute >>>>>>>>>>>>> might cause >>>>>>>>>>>>> problems in transaction recovery. Perhaps we could follow >>>>>>>>>>>>> Rado's idea and >>>>>>>>>>>>> set it to the node name by default? >>>>>>>>>>>>> >>>>>>>>>>>> Indeed - it would cause serious data integrity problems if a >>>>>>>>>>>> non-unique node-identifier is used. >>>>>>>>>>>> >>>>>>>>>>>>> Some more comments inlined.
>>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Sebastian >>>>>>>>>>>>> >>>>>>>>>>>>> On Fri, Apr 13, 2018 at 7:07 PM Radoslav Husar >>>>>>>>>>>> redhat.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> > Hi Sebastian, >>>>>>>>>>>>> > >>>>>>>>>>>>> > On Wed, Apr 11, 2018 at 2:31 PM, Sebastian Laskawiec >>>>>>>>>>>>> > wrote: >>>>>>>>>>>>> > > Hey Rado, Paul, >>>>>>>>>>>>> > > >>>>>>>>>>>>> > > I started looking into this issue and it turned out that >>>>>>>>>>>>> the WF subsystem >>>>>>>>>>>>> > > template doesn't provide the `node-identifier` attribute [1]. >>>>>>>>>>>>> > >>>>>>>>>>>>> > I assume you mean that the default WildFly server profiles >>>>>>>>>>>>> do not >>>>>>>>>>>> > explicitly define the attribute. Right? Thus the value >>>>>>>>>>>>> defaults in >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > the model to "1" >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> https://github.com/wildfly/wildfly/blob/master/transactions/src/main/java/org/jboss/as/txn/subsystem/TransactionSubsystemRootResourceDefinition.java#L97 >>>>>>>>>>>>> > whose sole intention seems to be to log a warning on boot if >>>>>>>>>>>>> the value >>>>>>>>>>>>> > is unchanged. >>>>>>>>>>>>> > Why they decided on a constant that will be inherently not >>>>>>>>>>>>> unique, as >>>>>>>>>>>>> > opposed to defaulting to the node name (which we already >>>>>>>>>>>>> require to be >>>>>>>>>>>>> > unique) as the clustering node name or undertow instance-id >>>>>>>>>>>>> does, is >>>>>>>>>>>>> > unclear to me. >>>>>>>>>>>>> > Some context is on https://issues.jboss.org/browse/WFLY-1119 >>>>>>>>>>>>> > >>>>>>>>>>>>> >>>>>>>>>>>>> In an OpenShift environment we could set it to `hostname`. This >>>>>>>>>>>>> is guaranteed >>>>>>>>>>>>> to be unique in the whole OpenShift cluster. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> We do this too in EAP images.
>>>>>>>>>>>> To Rado's point, the default is "1" so we can print the warning >>>>>>>>>>>> to alert people they are misconfigured - it seems to be working :) >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> This is the point. From my understanding, if we set it to the node >>>>>>>>>>> name (instead of "1"), we could make it always work correctly. We could >>>>>>>>>>> even remove the code that emits the warning (since the node name needs to >>>>>>>>>>> be unique). >>>>>>>>>>> >>>>>>>>>>> To sum it up - if we decided to proceed this way, there would be >>>>>>>>>>> no requirement to set the node-identifier at all. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> For OpenShift you are right: there is no requirement for someone >>>>>>>>>> to change the node-identifier from the pod name, and that is why EAP >>>>>>>>>> images do that. >>>>>>>>>> >>>>>>>>>> For bare-metal it is different, as there can be two servers on the >>>>>>>>>> same machine; if they were configured to use the hostname as their >>>>>>>>>> node-identifier and were also connected to the same resource >>>>>>>>>> managers or the same object store, they would interfere with each other. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > > I'm not sure if you guys are the right people to ask, but >>>>>>>>>>>>> is it safe to >>>>>>>>>>>>> > > leave it set to the default? Or shall I override our >>>>>>>>>>>>> Infinispan templates and >>>>>>>>>>>>> > > add this parameter (as I mentioned before, in OpenShift >>>>>>>>>>>>> I wanted to >>>>>>>>>>>>> > set >>>>>>>>>>>>> > > it to the Pod name trimmed to the last 23 chars since this is >>>>>>>>>>>>> the limit). >>>>>>>>>>>>> >>>>>>>>>>>> Putting a response to this in line - I am not certain who >>>>>>>>>>>> originally proposed this. >>>>>>>>>>>> >>>>>>>>>>>> You must use a globally unique node-identifier.
If you are >>>>>>>>>>>> certain the last 23 characters guarantee uniqueness, it would be valid - if there >>>>>>>>>>>> is a chance they are not unique, it is not valid to trim. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> If that's not an issue, again, we could use the same limit as we >>>>>>>>>>> have for the node name. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > It is not safe to leave it set to "1" as that results in >>>>>>>>>>>>> inconsistent >>>>>>>>>>>>> > processing of transaction recovery. >>>>>>>>>>>>> > IIUC we already set it to the node name for both EAP and JDG >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> https://github.com/jboss-openshift/cct_module/blob/master/os-eap70-openshift/added/standalone-openshift.xml#L411 >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> https://github.com/jboss-openshift/cct_module/blob/master/os-jdg7-conffiles/added/clustered-openshift.xml#L282 >>>>>>>>>>>>> >>>>>>>>>>>> > which in turn defaults to the pod name, so which profiles >>>>>>>>>>>>> are we >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > talking about here? >>>>>>>>>>>>> > >>>>>>>>>>>>> >>>>>>>>>>>>> Granted, we set it by default in CCT Modules. However, in >>>>>>>>>>>>> Infinispan we just >>>>>>>>>>>>> grab the provided transaction subsystem when rendering the full >>>>>>>>>>>>> configuration from >>>>>>>>>>>>> featurepacks: >>>>>>>>>>>>> >>>>>>>>>>>>> https://github.com/infinispan/infinispan/blob/master/server/integration/feature-pack/src/main/resources/configuration/standalone/subsystems-cloud.xml#L19 >>>>>>>>>>>>> >>>>>>>>>>>>> The default configuration XML doesn't contain the >>>>>>>>>>>>> `node-identifier` >>>>>>>>>>>>> attribute. I can add it manually in the cloud.xml but I >>>>>>>>>>>>> believe the right >>>>>>>>>>>>> approach is to modify the transaction subsystem.
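The "last 23 chars" trimming discussed above can be sketched as follows. This is illustrative only: `node_identifier` is a hypothetical helper, not part of any template, and the approach is only safe when uniqueness survives the trim - e.g. StatefulSet pod names that differ in their trailing ordinal.

```python
import socket

def node_identifier(hostname=None, max_len=23):
    """Sketch: derive a transaction node identifier from the pod hostname,
    keeping only the last `max_len` characters (23 is the limit discussed
    in this thread). Only valid if the trimmed names stay unique, e.g.
    StatefulSet pods whose names differ in the trailing ordinal
    (infinispan-server-0, infinispan-server-1, ...)."""
    if hostname is None:
        hostname = socket.gethostname()
    return hostname[-max_len:]
```

Note that for names longer than 23 characters only the distinguishing suffix survives, which is exactly why the trim is unsafe when two pods share their last 23 characters.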
>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> > Rado >>>>>>>>>>>>> > >>>>>>>>>>>>> > > Thanks, >>>>>>>>>>>>> > > Seb >>>>>>>>>>>>> > > >>>>>>>>>>>>> > > [1] usually set to node-identifier="${jboss.node.name}" >>>>>>>>>>>>> > > >>>>>>>>>>>>> > > >>>>>>>>>>>>> >>>>>>>>>>>> > > On Mon, Apr 9, 2018 at 10:39 AM Sanne Grinovero >>>>>>>>>>>> infinispan.org> >>>>>>>>>>>>> > > wrote: >>>>>>>>>>>>> > >> >>>>>>>>>>>>> > >> On 9 April 2018 at 09:26, Sebastian Laskawiec >>>>>>>>>>>> at redhat.com> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > wrote: >>>>>>>>>>>>> > >> > Thanks for looking into it Sanne. Of course, we should >>>>>>>>>>>>> add it (it can >>>>>>>>>>>>> > be >>>>>>>>>>>>> > >> > set >>>>>>>>>>>>> > >> > to the same name as hostname since those are unique in >>>>>>>>>>>>> Kubernetes). >>>>>>>>>>>>> > >> > >>>>>>>>>>>>> > >> > Created https://issues.jboss.org/browse/ISPN-9051 for >>>>>>>>>>>>> it. >>>>>>>>>>>>> > >> > >>>>>>>>>>>>> > >> > Thanks again! >>>>>>>>>>>>> > >> > Seb >>>>>>>>>>>>> > >> >>>>>>>>>>>>> > >> Thanks Sebastian! >>>>>>>>>>>>> > >> >>>>>>>>>>>>> > >> > >>>>>>>>>>>>> >>>>>>>>>>>> > >> > On Fri, Apr 6, 2018 at 8:53 PM Sanne Grinovero >>>>>>>>>>>> infinispan.org> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > >> > wrote: >>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> > >> >> Hi all, >>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> > >> >> I've started to use the Infinispan Openshift Template >>>>>>>>>>>>> and was >>>>>>>>>>>>> > browsing >>>>>>>>>>>>> > >> >> through the errors and warnings this produces. >>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> > >> >> In particular I noticed "WFLYTX0013: Node identifier >>>>>>>>>>>>> property is set >>>>>>>>>>>>> > >> >> to the default value. Please make sure it is unique." >>>>>>>>>>>>> being produced >>>>>>>>>>>>> > >> >> by the transaction system. 
>>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> > >> >> The node id is usually not needed for developer's >>>>>>>>>>>>> convenience and >>>>>>>>>>>>> > >> >> assuming there's a single node in "dev mode", yet >>>>>>>>>>>>> clearly the >>>>>>>>>>>>> > >> >> Infinispan template is meant to work with multiple >>>>>>>>>>>>> nodes running so >>>>>>>>>>>>> > >> >> this warning seems concerning. >>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> > >> >> I'm not sure what the impact is on the transaction >>>>>>>>>>>>> manager so I asked >>>>>>>>>>>>> > >> >> on the Narayana forums; Tom pointed me to some >>>>>>>>>>>>> thourough design >>>>>>>>>>>>> > >> >> documents and also suggested the EAP image does set >>>>>>>>>>>>> the node >>>>>>>>>>>>> > >> >> identifier: >>>>>>>>>>>>> > >> >> - https://developer.jboss.org/message/981702#981702 >>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> > >> >> WDYT? we probably want the Infinispan template to set >>>>>>>>>>>>> this as well, >>>>>>>>>>>>> > or >>>>>>>>>>>>> > >> >> silence the warning? >>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> > >> >> Thanks, >>>>>>>>>>>>> > >> >> Sanne >>>>>>>>>>>>> > >> >> _______________________________________________ >>>>>>>>>>>>> > >> >> infinispan-dev mailing list >>>>>>>>>>>>> >>>>>>>>>>>> > >> >> infinispan-dev at lists.jboss.org >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > >> >> >>>>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>>> > >> > >>>>>>>>>>>>> > >> > >>>>>>>>>>>>> > >> > _______________________________________________ >>>>>>>>>>>>> > >> > infinispan-dev mailing list >>>>>>>>>>>>> >>>>>>>>>>>> > >> > infinispan-dev at lists.jboss.org >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>>> > >> _______________________________________________ >>>>>>>>>>>>> > >> infinispan-dev mailing list >>>>>>>>>>>>> > >> infinispan-dev at lists.jboss.org >>>>>>>>>>>>> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>>> > >>>>>>>>>>>>> 
-------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180510/e572c406/attachment-0001.html From slaskawi at redhat.com Mon May 14 09:32:34 2018 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 14 May 2018 15:32:34 +0200 Subject: [infinispan-dev] Maintenance of OpenShift templates In-Reply-To: References: Message-ID: Just to follow up on this subject - a new toolkit called Cekit has been released [1] (Cekit is a replacement for Concreate). It supports ODCS repositories so it should be possible to build a community image from it. IMO, we should start looking at it either now or after GA is released. Even though the second approach (after GA) makes much more sense, the release cycle will be much longer by then. Thanks, Sebastian [1] https://github.com/cekit/cekit/releases/tag/2.0.0rc1 On Wed, Mar 7, 2018 at 12:14 PM Galder Zamarreño wrote: > Sebastian Laskawiec writes: > > > On Tue, Mar 6, 2018 at 5:11 PM Galder Zamarreño > > wrote: > > > > Sebastian Laskawiec writes: > > > > > Hey Galder, > > > > > > Comments inlined.
> > > > > > Thanks, > > > Seb > > > > > > On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarreño > > > > > wrote: > > > > > > Hi, > > > > > > Looking at [1] and I'm wondering why the templates have to > > > maintain a > > > different XML file for OpenShift? > > > > > > We already ship an XML in the server called `cloud.xml`, that > > > should > > > just work. Having a separate XML file in the templates means > > we're > > > duplicating the maintenance of XML files. > > > > > > Also, users can now create caches programmatically. This is by > > far > > > the > > > most common tweak that had to be done to the config. So, I see > > the > > > urgency to change XML files as less immediate. > > > > > > So just to give you guys a bit more context - the templates were > > > created a pretty long time ago when we didn't have admin > > capabilities in > > > Hot Rod and REST. The main argument for putting the whole > > > configuration into a ConfigMap was to make configuration changes > > > easier for the users. With the ConfigMap approach they can log into > > > the OpenShift UI, go to Resources -> ConfigMaps and edit everything > > using > > > the UI. That's super convenient for hacking in my opinion. Of > > course, you > > > don't need to do that at all if you don't want. You can just > > spin up a > > > new Infinispan cluster using `oc new-app`. > > > > I agree with the usability of the ConfigMap. However, the > > duplication is > > very annoying. Would it be possible for the ConfigMap to be > > created on > > the fly out of the cloud.xml that's shipped by Infinispan Server? > > That > > way we'd still have a ConfigMap without having to duplicate XML. > > > > Probably not. This would require special permissions to call > > the Kubernetes API from the Pod. In other words, I can't think of any > > other way that would work in OpenShift Online, for instance. > > > > > There are at least two other ways for changing the configuration > > that > > > I can think of.
The first one is S2I [1][2] (long story short, > > you > > > need to put your configuration into a git repository and tell > > > OpenShift to build an image based on it). Even though it may > > seem very > > > convenient, it's an OpenShift-only solution (and there are no easy > > (out > > > of the box) options to get this running on raw Kubernetes). I'm > > not > > > judging whether it's good or bad here, just telling you how it > > works. > > > The other option would be to tell the users to do exactly the > > same > > > things we do in our templates themselves. In other words we > > would > > > remove configuration from the templates and provide a manual for > > the > > > users on how to deal with configuration. I believe this is exactly > > what > > > Galder is suggesting, right? > > > > What we do in the templates right now to show users how to tweak > > their > > config is convoluted. > > > > Ideally, adding their own custom configuration should be just a > > matter > > of: > > > > 1. Creating a ConfigMap yaml pointing to an XML. > > 2. Asking users to put their XML in a separate file pointed to by the > > ConfigMap. > > 3. Deploying the ConfigMap and XML. > > 4. Triggering a new Infinispan redeployment. > > > > That would probably need to be a new deployment. Most of the > > StatefulSet spec is immutable. > > > > Not sure how doable this is with the current template approach, or > > we > > could explain how to do this for an already up and running > > application > > that has Infinispan created out of the default template? > > > > I've been thinking about this for a while and this is what I think we > > should do: > > > > 1 Wait a couple of weeks and review the community image created by the > > CE Team. See if this is a good fit for us. If it is, I would focus > > on adopting this approach and adjust our templates to handle it. > > 2 Whether or not we adopt the CE community work, we could put all > > necessary stuff into the cloud.xml or services.xml configuration.
We > > could do one step forward and merge them together. > > 3 Make sure that dynamically created caches are persisted (this is > > super important!!) > > 4 Once #3 is verified we should have a decision on whether or not we are > > adopting the CE way. At this point we could document how to use > > custom configuration with a ConfigMap and drop it from the > > templates. > > > > WDYT? Does this plan make sense to you? > > Sounds good > > > > > > > > Recently we implemented admin commands in Hot Rod. Assuming > that > > caches created this way are not wiped out during restart (that > needs > > to be checked), we could remove the configuration from the > templates > > and tell the users to create their caches over Hot Rod and REST. > > However we still need to have a back door for modifying > configuration > > manually since there are some changes that cannot be done via the > admin > > API. > > > > [1] https://github.com/openshift/source-to-image > > [2] > > > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble > > > > > > > > Sure, there will always be people who modify/tweak things and > > that's > > fine. We should however show the people how to do that in a way > > that > > doesn't require us to duplicate our maintenance work. > > > > If we think about further maintenance, I believe we should take > more > > things into consideration. During the last planning meeting > Tristan > > mentioned bringing the project and the product closer > together. > > On the Cloud Enablement side of things there are ongoing > experiments > > to get community images out. > > > > If we decided to take this direction (the CE way), our templates > would > > need to be deprecated or will change drastically. The image will > react > > to a different set of variables and configuration options.
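The ConfigMap route from the numbered steps above could look something like this sketch. All resource names, the mount path, and the XML payload are illustrative placeholders, not taken from the actual Infinispan templates.

```yaml
# Illustrative sketch only - names and paths are hypothetical,
# not the ones used by the Infinispan OpenShift templates.
apiVersion: v1
kind: ConfigMap
metadata:
  name: infinispan-config
data:
  cloud.xml: |
    <!-- user-supplied Infinispan server configuration goes here -->
```

The StatefulSet would then mount this ConfigMap as a volume (say, at a hypothetical /opt/infinispan/config) and point the server at the mounted cloud.xml; editing the ConfigMap and rolling out a new deployment (a fresh deployment, given that most of the StatefulSet spec is immutable) would pick up the changed configuration.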
> > > > > > Also, if we want to show the users how to use a custom XML file, > > I > > > don't > > > think we should show them how to embed it in the template as > > JSON > > > [2]. It's quite a pain. Instead, the XML should be kept as a > > > separate > > > file and have the JSON file reference it. > > > > > > I'm still struggling to understand why this is a pain. Could you > > > please explain it a bit more? If you look into the maintenance > > guide > > > [3], there are only a few steps. For me it takes no longer than > > 15 > > > minutes to do the upgrade. You also mentioned on IRC that this > > > approach is a pain for our users (I believe you mentioned > > something > > > about Ray). I also cannot understand why, could you please > > explain it > > > a bit more? > > > > > > [3] > > > > > > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > > > > > > > Cheers, > > > > > > [1] > > > > > > https://github.com/infinispan/infinispan-openshift-templates/pull/16/files > > > > > > > > [2] > > > > > > https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180514/db6a7d7e/attachment.html From gustavo at infinispan.org Mon May 14 09:51:58 2018 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 14 May 2018 14:51:58 +0100 Subject: [infinispan-dev] Maintenance of OpenShift templates In-Reply-To: References: Message-ID: Given that the docs mention that "Cekit and Concreate are the very same tool, Concreate was renamed to Cekit in the 2.0 release.", does it change the outcome of the discussion in [1]? [1] https://www.mail-archive.com/infinispan-dev at lists.jboss.org/msg10847.html Thanks, Gustavo On Mon, May 14, 2018 at 2:32 PM, Sebastian Laskawiec wrote: > Just to follow up on this subject - a new toolkit called Cekit has been > released [1] (Cekit is a replacement for Concreate). It supports ODCS > repositories so it should be possible to build a community image from it. > > IMO, we should start looking at it now or after GA is released. Even > though the second approach (after GA) makes much more sense, the release > cycle will be much longer since then. > > Thanks, > Sebastian > > [1] https://github.com/cekit/cekit/releases/tag/2.0.0rc1 > > On Wed, Mar 7, 2018 at 12:14 PM Galder Zamarreño > wrote: > >> Sebastian Laskawiec writes: >> >> > On Tue, Mar 6, 2018 at 5:11 PM Galder Zamarreño >> > wrote: >> > >> > Sebastian Laskawiec writes: >> > >> > > Hey Galder, >> > > >> > > Comments inlined. >> > > >> > > Thanks, >> > > Seb >> > > >> > > On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarreño >> > >> > > wrote: >> > > >> > > Hi, >> > > >> > > Looking at [1] and I'm wondering why the templates have to >> > > maintain a >> > > different XML file for OpenShift? >> > > >> > > We already ship an XML in the server called `cloud.xml`, that >> > > should >> > > just work. Having a separate XML file in the templates means >> > we're >> > > duplicating the maintenance of XML files. >> > > >> > > Also, users can now create caches programmatically.
This is by >> > far >> > > the >> > > most common tweak that had to be done to the config. So, I see >> > the >> > > urgency to change XML files less immediate. >> > > >> > > So just to give you guys a bit more context - the templates were >> > > created pretty long time ago when we didn't have admin >> > capabilities in >> > > Hot Rod and REST. The main argument for putting the whole >> > > configuration into a ConfigMap was to make configuration changes >> > > easier for the users. With ConfigMap approach they can log into >> > > OpenShift UI, go to Resources -> ConfigMaps and edit everything >> > using >> > > UI. That's super convenient for hacking in my opinion. Of >> > course, you >> > > don't need to do that at all if you don't want. You can just >> > spin up a >> > > new Infinispan cluster using `oc new-app`. >> > >> > I agree with the usability of the ConfigMap. However, the >> > duplication is >> > very annoying. Would it be possible for the ConfigMap to be >> > created on >> > the fly out of the cloud.xml that's shipped by Infinispan Server? >> > That >> > way we'd still have a ConfigMap without having to duplicate XML. >> > >> > Probably not. This would require special permissions to call >> > Kubernetes API from the Pod. In other words, I can't think about any >> > other way that would work in OpenShift Online for the instance. >> > >> > > There are at least two other ways for changing the configuration >> > that >> > > I can think of. The first one is S2I [1][2] (long story short, >> > you >> > > need to put your configuration into a git repository and tell >> > > OpenShift to build an image based on it). Even though it may >> > seem very >> > > convenient, it's OpenShift only solution (and there are no easy >> > (out >> > > of the box) options to get this running on raw Kubernetes). I'm >> > not >> > > judging whether it's good or bad here, just telling you how it >> > works. 
>> > > The other option would be to tell the users to do exactly the >> > same >> > > things we do in our templates themselves. In other words we >> > would >> > > remove configuration from the templates and provide a manual for >> > the >> > > users how to deal with configuration. I believe this is exactly >> > what >> > > Galder is suggesting, right? >> > >> > What we do in the templates right now to show users how to tweak >> > their >> > config is in convoluted. >> > >> > Ideally, adding their own custom configuration should be just a >> > matter >> > of: >> > >> > 1. Creating a ConfigMap yaml pointing to an XML. >> > 2. Ask users to put their XML in a separate file pointed by the >> > ConfigMap. >> > 3. Deploy ConfigMap and XML. >> > 4. Trigger a new Infinispan redeployment. >> > >> > That would probably need to be a new deployment. Most of the >> > StatefulSet spec is immutable. >> > >> > Not sure how doable this is with the current template approach, or >> > we >> > could explain how to do this for an already up and running >> > application >> > that has Infinispan created out of the default template? >> > >> > I've been thinking about this for a while and this is what I think we >> > should do: >> > >> > 1 Wait a couple of weeks and review the community image created by the >> > CE Team. See if this is a good fit for us. If it is, I would focus >> > on adopting this approach and adjust our templates to handle it. >> > 2 Whether or not we adopt the CE community work, we could put all >> > necessary stuff into cloud.xml or services.xml configuration. We >> > could do one step forward and merge them together. >> > 3 Make sure that dynamically created caches are persisted (this is >> > super important!!) >> > 4 Once #3 is verified we should have a decision whether or not we are >> > adopting the CE way. At this point we could document how to use >> > custom configuration with a ConfigMap and drop it from the >> > templates. >> > >> > WDYT? 
>> > Does this plan make sense to you?
>>
>> Sounds good
>>
>> >
>> > >
>> > > Recently we implemented admin commands in the Hot Rod. Assuming
>> > that
>> > > caches created this way are not wiped out during restart (that
>> > needs
>> > > to be checked), we could remove the configuration from the
>> > templates
>> > > and tell the users to create their caches over Hot Rod and REST.
>> > > However we still need to have a back door for modifying
>> > configuration
>> > > manually since there are some changes that cannot be done via the
>> > admin
>> > > API.
>> > >
>> > > [1] https://github.com/openshift/source-to-image
>> > > [2]
>> > >
>> > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble
>> >
>> > >
>> > >
>> > > Sure, there will always be people who modify/tweak things and
>> > > that's
>> > > fine. We should however show people how to do that in a way
>> > > that
>> > > doesn't require us to duplicate our maintenance work.
>> > >
>> > > If we think about further maintenance, I believe we should take
>> > more
>> > > things into consideration. During the last planning meeting
>> > Tristan
>> > > mentioned bringing the project and the product closer
>> > together.
>> > > On the Cloud Enablement side of things there are ongoing
>> > experiments
>> > > to get community images out.
>> > >
>> > > If we decided to take this direction (the CE way), our templates
>> > would
>> > > need to be deprecated or changed drastically. The image will
>> > react
>> > > to a different set of variables and configuration options.
>> > >
>> > > Also, if we want to show the users how to use a custom XML file,
>> > I
>> > > don't
>> > > think we should show them how to embed it in the template as
>> > JSON
>> > > [2]. It's quite a pain. Instead, the XML should be kept as a
>> > > separate
>> > > file and the JSON file should reference it.
>> > >
>> > > I'm still struggling to understand why this is a pain.
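The admin-commands route mentioned above, where callers create their caches over Hot Rod instead of shipping definitions in the template XML, can be sketched as follows. This is a minimal, self-contained illustration: the CacheAdmin interface is a hypothetical stand-in, not the real RemoteCacheManagerAdmin API, and whether caches survive restarts (point 3 of the plan) still has to be verified on a real server.

```java
// Self-contained sketch of "create caches over Hot Rod" as discussed above.
// The real client exposes this through RemoteCacheManager.administration();
// the CacheAdmin type below is a simplified stand-in so the flow is visible
// without a running server. Treat the names and semantics as illustrative only.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AdminCacheSketch {

    /** Stand-in for the admin API: get the named cache, creating it on first use. */
    interface CacheAdmin {
        Map<String, String> getOrCreateCache(String name, String template);
    }

    static String demo() {
        Map<String, Map<String, String>> server = new ConcurrentHashMap<>();
        CacheAdmin admin = (name, template) ->
                server.computeIfAbsent(name, n -> new ConcurrentHashMap<>());

        // The template ships no cache definitions; clients create what they need.
        admin.getOrCreateCache("sessions", "default").put("user-1", "logged-in");

        // A later call must find the same cache rather than wiping it; that is
        // the persistence concern raised in point 3 of the plan above.
        return admin.getOrCreateCache("sessions", "default").get("user-1");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints: logged-in
    }
}
```

The back door for configuration changes that cannot be done via the admin API would remain a separate, manual step.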
Could you >> > > please explain it a bit more? If you look into the maintenance >> > guide >> > > [3], there are only a few steps. For me it takes no longer than >> > 15 >> > > minutes to do the upgrade. You also mentioned on IRC that this >> > > approach is a pain for our users (I believe you mentioned >> > something >> > > about Ray). I also can not understand why, could you please >> > explain it >> > > a bit more? >> > > >> > > [3] >> > > >> > https://github.com/infinispan/infinispan-openshift-templates# >> maintenance-guide >> > >> > > >> > > >> > > Cheers, >> > > >> > > [1] >> > > >> > https://github.com/infinispan/infinispan- >> openshift-templates/pull/16/files >> > >> > > >> > > [2] >> > > >> > https://github.com/infinispan/infinispan-openshift-templates# >> maintenance-guide >> > >> > > >> > > _______________________________________________ >> > > infinispan-dev mailing list >> > > infinispan-dev at lists.jboss.org >> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > >> > > >> > > _______________________________________________ >> > > infinispan-dev mailing list >> > > infinispan-dev at lists.jboss.org >> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180514/9f9ee45f/attachment-0001.html From galder at redhat.com Wed May 16 03:37:53 2018 From: galder at redhat.com (Galder Zamarreno) Date: Wed, 16 May 2018 09:37:53 +0200 Subject: [infinispan-dev] Passing client listener parameters programmatically In-Reply-To: <6fb0779b-acd6-821c-9a85-af67f0296a02@redhat.com> References: <6fb0779b-acd6-821c-9a85-af67f0296a02@redhat.com> Message-ID: I've created a JIRA to track this: https://issues.jboss.org/browse/ISPN-9151 On Mon, Apr 16, 2018 at 10:21 AM Adrian Nistor wrote: > +1 for both points. > > And I absolutely have to add that I never liked the annotation based > listeners, both the embedded and the remote ones. > > On 04/16/2018 10:48 AM, Dan Berindei wrote: > > +1 to not require annotations, but -100 to ignore the annotations if > present, we should throw an exception instead. > > Dan > > On Fri, Apr 13, 2018 at 9:57 PM, William Burns > wrote: > >> I personally have never been a fan of the whole annotation thing to >> configure your listener, unfortunately it just has been this way. >> >> If you are just proposing to adding a new addClientListener method that >> takes those arguments, I don't have a problem with it. >> >> void addClientListener(Object listener, String filterFactoryName, >> Object[] filterFactoryParams, String converterFactoryName, Object[] >> converterFactoryParams); >> >> I would think we would use these values only and ignore any defined on >> the annotation. >> >> >> Also similar to this but I have some API ideas I would love to explore >> for ISPN 10 surrounding events and the consumption of them. >> >> - Will >> >> On Fri, Apr 13, 2018 at 11:12 AM Galder Zamarreno >> wrote: >> >>> Hi, >>> >>> We're working with the OpenWhisk team to create a generic Feed that >>> allows Infinispan remote events to be exposed in an OpenWhisk way. 
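For illustration, the overload Will proposes above could behave as sketched below, including Dan's suggestion to throw when factory names are given both via the annotation and as parameters. The annotation and method here are simplified stand-ins, not the actual Hot Rod client API.

```java
// Self-contained sketch of the proposed programmatic listener registration.
// The @ClientListener annotation and addClientListener method are toy stand-ins.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ListenerParamsSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @interface ClientListener {
        String filterFactoryName() default "";
        String converterFactoryName() default "";
    }

    // Factory names and parameters passed programmatically, not via annotation.
    static String addClientListener(Object listener,
                                    String filterFactoryName, Object[] filterFactoryParams,
                                    String converterFactoryName, Object[] converterFactoryParams) {
        ClientListener ann = listener.getClass().getAnnotation(ClientListener.class);
        // Fail fast instead of silently ignoring annotation values.
        if (ann != null && (!ann.filterFactoryName().isEmpty() || !ann.converterFactoryName().isEmpty()))
            throw new IllegalArgumentException(
                    "Factory names set both on the annotation and as parameters");
        return filterFactoryName; // stand-in for the actual registration
    }

    @ClientListener // no factory names on the annotation; they come in as arguments
    static class FeedListener {}

    public static void main(String[] args) {
        String registered = addClientListener(new FeedListener(),
                "dynamic-filter-factory", new Object[] { "created", "removed" },
                null, null);
        System.out.println(registered); // prints: dynamic-filter-factory
    }
}
```

A generic feed, such as the OpenWhisk one, could then supply the factory names at runtime instead of baking them into an annotated class.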
>>> >>> So, you'd pass in Hot Rod endpoint information, name of cache and other >>> details and you'd establish a feed of data from that cache for >>> create/updated/removed data. >>> >>> However, making this generic is tricky when you want to pass in >>> filter/converter factory names since these are defined at the annotation >>> level. >>> >>> Ideally we should have a way to pass in filter/converter factory names >>> programmatically. To avoid limiting ourselves, you could potentially pass >>> in an instance of the annotation in an overloaded method or as optional >>> parameter [1]. >>> >>> Thoughts? >>> >>> Cheers, >>> Galder >>> >>> [1] >>> https://stackoverflow.com/questions/16299717/how-to-create-an-instance-of-an-annotation >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > > _______________________________________________ > infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180516/6f6294a9/attachment.html

From slaskawi at redhat.com Thu May 17 04:27:18 2018
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Thu, 17 May 2018 10:27:18 +0200
Subject: [infinispan-dev] Maintenance of OpenShift templates
In-Reply-To:
References:
Message-ID:

It's exactly the opposite ;) But one big thing has changed since the conversation you linked took place - we decided to bring the project and the product closer together. There is no other way to do it than using Concreate/Cekit, unfortunately.

The alternative to this proposal means spending cycles on maintaining two concurrent solutions - the community Dockerfile approach and the product Concreate/Cekit approach.

On Mon, May 14, 2018 at 3:53 PM Gustavo Fernandes wrote:
> Given that the docs mention that "Cekit and Concreate are the very same
> tool, Concreate was rename to Cekit in 2.0 release.", does it change the
> outcome of the discussion in [1]?
>
> [1]
> https://www.mail-archive.com/infinispan-dev at lists.jboss.org/msg10847.html
>
> Thanks,
> Gustavo
>
>
> On Mon, May 14, 2018 at 2:32 PM, Sebastian Laskawiec
> wrote:
>
>> Just to follow up on this subject - a new toolkit called Cekit has been
>> released [1] (Cekit is a replacement for Concreate). It supports ODCS
>> repositories so it should be possible to build a community image from it.
>>
>> IMO, we should start looking at it now or after GA is released. Even
>> though the second approach (after GA) makes much more sense, the release
>> cycle will be much longer since then.
>>
>> Thanks,
>> Sebastian
>>
>> [1] https://github.com/cekit/cekit/releases/tag/2.0.0rc1
>>
>> On Wed, Mar 7, 2018 at 12:14 PM Galder Zamarreño
>> wrote:
>>
>>> Sebastian Laskawiec writes:
>>>
>>> > On Tue, Mar 6, 2018 at 5:11 PM Galder Zamarreño
>>> > wrote:
>>> >
>>> > Sebastian Laskawiec writes:
>>> >
>>> > > Hey Galder,
>>> > >
>>> > > Comments inlined.
>>> > > >>> > > Thanks, >>> > > Seb >>> > > >>> > > On Fri, Mar 2, 2018 at 11:37 AM Galder Zamarre?o >>> > >>> > > wrote: >>> > > >>> > > Hi, >>> > > >>> > > Looking at [1] and I'm wondering why the templates have to >>> > > maintain a >>> > > different XML file for OpenShift? >>> > > >>> > > We already ship an XML in the server called `cloud.xml`, that >>> > > should >>> > > just work. Having a separate XML file in the templates means >>> > we're >>> > > duplicating the maintainance of XML files. >>> > > >>> > > Also, users can now create caches programmatically. This is by >>> > far >>> > > the >>> > > most common tweak that had to be done to the config. So, I see >>> > the >>> > > urgency to change XML files less immediate. >>> > > >>> > > So just to give you guys a bit more context - the templates were >>> > > created pretty long time ago when we didn't have admin >>> > capabilities in >>> > > Hot Rod and REST. The main argument for putting the whole >>> > > configuration into a ConfigMap was to make configuration changes >>> > > easier for the users. With ConfigMap approach they can log into >>> > > OpenShift UI, go to Resources -> ConfigMaps and edit everything >>> > using >>> > > UI. That's super convenient for hacking in my opinion. Of >>> > course, you >>> > > don't need to do that at all if you don't want. You can just >>> > spin up a >>> > > new Infinispan cluster using `oc new-app`. >>> > >>> > I agree with the usability of the ConfigMap. However, the >>> > duplication is >>> > very annoying. Would it be possible for the ConfigMap to be >>> > created on >>> > the fly out of the cloud.xml that's shipped by Infinispan Server? >>> > That >>> > way we'd still have a ConfigMap without having to duplicate XML. >>> > >>> > Probably not. This would require special permissions to call >>> > Kubernetes API from the Pod. In other words, I can't think about any >>> > other way that would work in OpenShift Online for the instance. 
>>> > >>> > > There are at least two other ways for changing the configuration >>> > that >>> > > I can think of. The first one is S2I [1][2] (long story short, >>> > you >>> > > need to put your configuration into a git repository and tell >>> > > OpenShift to build an image based on it). Even though it may >>> > seem very >>> > > convenient, it's OpenShift only solution (and there are no easy >>> > (out >>> > > of the box) options to get this running on raw Kubernetes). I'm >>> > not >>> > > judging whether it's good or bad here, just telling you how it >>> > works. >>> > > The other option would be to tell the users to do exactly the >>> > same >>> > > things we do in our templates themselves. In other words we >>> > would >>> > > remove configuration from the templates and provide a manual for >>> > the >>> > > users how to deal with configuration. I believe this is exactly >>> > what >>> > > Galder is suggesting, right? >>> > >>> > What we do in the templates right now to show users how to tweak >>> > their >>> > config is in convoluted. >>> > >>> > Ideally, adding their own custom configuration should be just a >>> > matter >>> > of: >>> > >>> > 1. Creating a ConfigMap yaml pointing to an XML. >>> > 2. Ask users to put their XML in a separate file pointed by the >>> > ConfigMap. >>> > 3. Deploy ConfigMap and XML. >>> > 4. Trigger a new Infinispan redeployment. >>> > >>> > That would probably need to be a new deployment. Most of the >>> > StatefulSet spec is immutable. >>> > >>> > Not sure how doable this is with the current template approach, or >>> > we >>> > could explain how to do this for an already up and running >>> > application >>> > that has Infinispan created out of the default template? >>> > >>> > I've been thinking about this for a while and this is what I think we >>> > should do: >>> > >>> > 1 Wait a couple of weeks and review the community image created by the >>> > CE Team. See if this is a good fit for us. 
If it is, I would focus >>> > on adopting this approach and adjust our templates to handle it. >>> > 2 Whether or not we adopt the CE community work, we could put all >>> > necessary stuff into cloud.xml or services.xml configuration. We >>> > could do one step forward and merge them together. >>> > 3 Make sure that dynamically created caches are persisted (this is >>> > super important!!) >>> > 4 Once #3 is verified we should have a decision whether or not we are >>> > adopting the CE way. At this point we could document how to use >>> > custom configuration with a ConfigMap and drop it from the >>> > templates. >>> > >>> > WDYT? Does this plan makes sense to you? >>> >>> Sounds good >>> >>> > >>> > > >>> > > Recently we implemented admin commands in the Hot Rod. Assuming >>> > that >>> > > caches created this way are not wiped out during restart (that >>> > needs >>> > > to be checked), we could remove the configuration from the >>> > templates >>> > > and tell the users to create their caches over Hot Rod and REST. >>> > > However we still need to have a back door for modifying >>> > configuration >>> > > manually since there are some changes that can not be done via >>> > admin >>> > > API. >>> > > >>> > > [1] https://github.com/openshift/source-to-image >>> > > [2] >>> > > >>> > >>> https://github.com/jboss-dockerfiles/infinispan/blob/master/server/.s2i/bin/assemble >>> > >>> > > >>> > > >>> > > Sure, there will always be people who modify/tweak things and >>> > > that's >>> > > fine. We should however show the people how to do that in a way >>> > > that >>> > > doesn't require us to duplicate our maintanence work. >>> > > >>> > > If we think about further maintenance, I believe we should take >>> > more >>> > > things into consideration. During the last planning meeting >>> > Tristan >>> > > mentioned about bringing the project and the product closer >>> > together. 
>>> > > On the Cloud Enablement side of things there are ongoing >>> > experiments >>> > > to get a community images out. >>> > > >>> > > If we decided to take this direction (the CE way), our templates >>> > would >>> > > need to be deprecated or will change drastically. The image will >>> > react >>> > > on different set of variables and configuration options. >>> > > >>> > > Also, if we want to show the users how to use a custom XML file, >>> > I >>> > > don't >>> > > think we should show them how to embedd it in the template as >>> > JSON >>> > > [2]. It's quite a pain. Instead, the XML should be kept as a >>> > > separate >>> > > file and the JSON file reference it. >>> > > >>> > > I'm still struggling to understand why this is a pain. Could you >>> > > please explain it a bit more? If you look into the maintenance >>> > guide >>> > > [3], there are only a few steps. For me it takes no longer than >>> > 15 >>> > > minutes to do the upgrade. You also mentioned on IRC that this >>> > > approach is a pain for our users (I believe you mentioned >>> > something >>> > > about Ray). I also can not understand why, could you please >>> > explain it >>> > > a bit more? 
>>> > > >>> > > [3] >>> > > >>> > >>> https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide >>> > >>> > > >>> > > >>> > > Cheers, >>> > > >>> > > [1] >>> > > >>> > >>> https://github.com/infinispan/infinispan-openshift-templates/pull/16/files >>> > >>> > > >>> > > [2] >>> > > >>> > >>> https://github.com/infinispan/infinispan-openshift-templates#maintenance-guide >>> > >>> > > >>> > > _______________________________________________ >>> > > infinispan-dev mailing list >>> > > infinispan-dev at lists.jboss.org >>> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > > >>> > > >>> > > _______________________________________________ >>> > > infinispan-dev mailing list >>> > > infinispan-dev at lists.jboss.org >>> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180517/a3e5d007/attachment-0001.html From vrigamon at redhat.com Mon May 28 09:15:31 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Mon, 28 May 2018 15:15:31 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC Message-ID: Hi Infinispan developers, I'm working on a solution for developers who need to access Infinispan services through different programming languages. The focus is not on developing a full featured client, but rather discover the value and the limits of this approach. - is it possible to automatically generate useful clients in different languages? 
- can those clients interoperate on the same cache with the same data types?

I came up with a small prototype that I would like to submit to you and on which I would like to gather your impressions.

You can find the project here [1]: it is a gRPC-based client/server architecture for Infinispan based on an EmbeddedCache, with very few features exposed at the moment.

Currently the project is nothing more than a PoC with the following interesting features:

- clients can be generated in all the gRPC-supported languages: Java, Go and C++ examples are provided;
- the interface is fully typed. No need for marshallers, and clients built in different languages can cooperate on the same cache;

The second item is my preferred one because it frees the developer from data marshalling.

What do you think about it?
Sounds interesting?
Can you see any flaw?

There's also a list of issues for the future [2]; basically I would like to investigate these questions:
How far can this architecture go?
Topology, events, queries... how many of the Infinispan features can fit in a gRPC architecture?

Thank you
Vittorio

[1] https://github.com/rigazilla/ispn-grpc
[2] https://github.com/rigazilla/ispn-grpc/issues

--

Vittorio Rigamonti

Senior Software Engineer

Red Hat

Milan, Italy

vrigamon at redhat.com

irc: rigazilla

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180528/9a19736e/attachment.html

From anistor at redhat.com Mon May 28 10:47:27 2018
From: anistor at redhat.com (Adrian Nistor)
Date: Mon, 28 May 2018 17:47:27 +0300
Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC
In-Reply-To: References: Message-ID: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com>

Hi Vittorio,
thanks for exploring gRPC. It seems like a very elegant solution for exposing services. I'll have a look at your PoC soon.

I feel there are some remarks that need to be made regarding gRPC.
gRPC is just some nice cheesy topping on top of protobuf. Google's implementation of protobuf, to be more precise.
It does not need handwritten marshallers, but the 'No need for marshaller' claim does not accurately describe it. Marshallers are needed and are generated under the covers by the library, and so are the data objects, and you are unfortunately forced to use them. That's both the good news and the bad news :) The whole thing looks very promising and friendly for many use cases, especially for demos and PoCs :))). Nobody wants to write those marshallers. But it starts to become a nuisance if you want to use your own data objects.
There is also the ugliness and excessive memory footprint of the generated code, which is the reason Infinispan did not adopt the protobuf-java library although it did adopt protobuf as an encoding format.
The Protostream library was created as an alternative implementation to solve the aforementioned problems with the generated code. It solves this by letting the user provide their own data objects. And for the marshallers it gives you two options: a) write the marshaller yourself (hated), b) annotate your data objects and the marshaller gets generated (loved). Protostream does not currently support service definitions, but this is something I started to investigate recently after Galder asked me if I think it's doable. I think I'll only find out after I do it :)

Adrian

On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote:
> Hi Infinispan developers,
>
> I'm working on a solution for developers who need to access Infinispan
> services through different programming languages.
>
> The focus is not on developing a full featured client, but rather
> discover the value and the limits of this approach.
>
> - is it possible to automatically generate useful clients in different
> languages?
> - can that clients interoperate on the same cache with the same data
> types?
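Adrian's option (b) above, annotating the data objects and letting the marshaller be derived, can be illustrated with a toy reflective sketch. The @ProtoField annotation and the "derived" marshaller below are hypothetical stand-ins, not Protostream's real API; they only show why the annotation route keeps the user's own data objects in play.

```java
// Self-contained toy version of annotation-driven marshalling: the user keeps
// their own data object and only tags the fields; a generic routine derives the
// wire form. Stand-in code, not the real Protostream library.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;
import java.util.TreeMap;

public class AnnotatedMarshallerSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface ProtoField { int number(); }

    // User-owned data object: no generated classes forced on the caller.
    static class Book {
        @ProtoField(number = 1) String title = "sample-title";
        @ProtoField(number = 2) int year = 2018;
    }

    // Stand-in for the derived marshaller: emit annotated fields in tag order.
    static String marshal(Object o) {
        TreeMap<Integer, Object> fields = new TreeMap<>();
        for (Field f : o.getClass().getDeclaredFields()) {
            ProtoField tag = f.getAnnotation(ProtoField.class);
            if (tag == null) continue;
            try {
                fields.put(tag.number(), f.get(o));
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Integer, Object> e : fields.entrySet())
            sb.append(e.getKey()).append(':').append(e.getValue()).append(' ');
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(marshal(new Book())); // prints: 1:sample-title 2:2018
    }
}
```

The real library does this at build time rather than via runtime reflection, and emits protobuf wire format instead of a debug string.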
> > I came out with a small prototype that I would like to submit to you > and on which I would like to gather your impressions. > > ?You can found the project here [1]: is a gRPC-based client/server > architecture for Infinispan based on and EmbeddedCache, with very few > features exposed atm. > > Currently the project is nothing more than a poc with the following > interesting features: > > - client can be generated in all the grpc supported language: java, > go, c++ examples are provided; > - the interface is full typed. No need for marshaller and clients > build in different language can cooperate on the same cache; > > The second item is my preferred one beacuse it frees the developer > from data marshalling. > > What do you think about? > Sounds interesting? > Can you see any flaw? > > There's also a list of issues for the future [2], basically I would > like to investigate these questions: > How far this architecture can go? > Topology, events, queries... how many of the Infinispan features can > be fit in a grpc architecture? > > Thank you > Vittorio > > [1] https://github.com/rigazilla/ispn-grpc > > [2] https://github.com/rigazilla/ispn-grpc/issues > > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180528/6a08aa1c/attachment.html From galder at redhat.com Tue May 29 04:35:52 2018 From: galder at redhat.com (Galder Zamarreno) Date: Tue, 29 May 2018 10:35:52 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> Message-ID: Hi all, @Vittorio, thanks a lot for working on this! Let me explain some of the background behind this effort so that we're all on the same page: The biggest problem I see in our client/server architecture is the ability to quickly deliver features/APIs across multiple language clients. Both Vittorio and I have seen how long it takes to implement all the different features available in Java client and port them to Node.js, C/C++/C#...etc. This effort lead by Vittorio is trying to improve on that by having some of that work done for us. Granted, not all of it will be done, but it should give us some good foundations on which to build. One thing I mentioned to Vittorio is that he should investigate what the performance impact of using gRPC is. This is crucial to decide whether to take this forward or not. This should really have been done by now so that other devs are aware of the cost in terms of latency and memory consumption. As you can see from the first comment, there are already concerns with its memory consumption. So, this needs to be done ASAP so that we're aware of the consequences right away. Also, when I looked at gRPC, I was considering having the base layer use only bytes, and we'd build the marshallers/encoders...etc we need on top. Maybe both approaches can be compared from the POV of performance. If gRPC performance is not up to scratch, we have the contacts to see if things can be improved. 
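As a starting shape for the measurement Galder asks for above, a harness could time operations like this. Both workloads are local stand-ins; real numbers would come from driving the actual Hot Rod and gRPC clients against a live server, plus heap profiling for the memory side.

```java
// Toy harness shape for the latency comparison discussed above. The two
// "clients" are local stand-ins so the harness itself is runnable; they
// roughly mimic a raw-bytes put versus a put that round-trips through a
// typed representation, as the generated stubs would.
import java.nio.charset.StandardCharsets;

public class LatencySketch {

    static double nanosPerOp(Runnable op, int iterations) {
        for (int i = 0; i < iterations / 10; i++) op.run(); // brief warm-up
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) op.run();
        return (System.nanoTime() - start) / (double) iterations;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[512];

        // Baseline: only copy raw bytes (the bytes-only base layer idea).
        Runnable rawBytesPut = () -> payload.clone();

        // Round-trip through a typed representation before sending.
        Runnable typedPut = () -> new String(payload, StandardCharsets.UTF_8)
                .getBytes(StandardCharsets.UTF_8);

        System.out.printf("raw bytes put : %.1f ns/op%n", nanosPerOp(rawBytesPut, 200_000));
        System.out.printf("typed put     : %.1f ns/op%n", nanosPerOp(typedPut, 200_000));
    }
}
```

For the real comparison, a tool like JMH would give far more trustworthy numbers than this hand-rolled loop.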
Once again, as I mentioned to Vittorio separately, if we can't rely on gRPC (or similar tool), it'd be nice to have just a C client (or a more typesafe client that compiles into C, e.g. Rust) that uses protobuf serialized messages and get any other language to be a wrapper of that. This is possible with Node.js and Haskell for example. With Java this is not currently an option since JNI is slow and cumbersome but maybe with Project Panama [4] this won't be problem in the future. So maybe a Java (w/ Netty) and C clients and the rest interfacing to them would be the way if gRPC does not work out. Cheers On Mon, May 28, 2018 at 4:50 PM Adrian Nistor wrote: > Hi Vittorio, > thanks for exploring gRPC. It seems like a very elegant solution for > exposing services. I'll have a look at your PoC soon. > > I feel there are some remarks that need to be made regarding gRPC. gRPC is > just some nice cheesy topping on top of protobuf. Google's implementation > of protobuf, to be more precise. > It does not need handwritten marshallers, but the 'No need for marshaller' > does not accurately describe it. Marshallers are needed and are generated > under the cover by the library and so are the data objects and you are > unfortunately forced to use them. That's both the good news and the bad > news:) The whole thing looks very promising and friendly for many uses > cases, especially for demos and PoCs :))). Nobody wants to write those > marshallers. But it starts to become a nuisance if you want to use your own > data objects. > There is also the ugliness and excessive memory footprint of the generated > code, which is the reason Infinispan did not adopt the protobuf-java > library although it did adopt protobuf as an encoding format. > The Protostream library was created as an alternative implementation to > solve the aforementioned problems with the generated code. It solves this > by letting the user provide their own data objects. 
And for the marshallers > it gives you two options: a) write the marshaller yourself (hated), b) > annotated your data objects and the marshaller gets generated (loved). > Protostream does not currently support service definitions right now but > this is something I started to investigate recently after Galder asked me > if I think it's doable. I think I'll only find out after I do it:) > > > Adrian > > > On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: > > Hi Infinispan developers, > > I'm working on a solution for developers who need to access Infinispan > services through different programming languages. > > The focus is not on developing a full featured client, but rather discover > the value and the limits of this approach. > > - is it possible to automatically generate useful clients in different > languages? > - can that clients interoperate on the same cache with the same data types? > > I came out with a small prototype that I would like to submit to you and > on which I would like to gather your impressions. > > You can found the project here [1]: is a gRPC-based client/server > architecture for Infinispan based on and EmbeddedCache, with very few > features exposed atm. > > Currently the project is nothing more than a poc with the following > interesting features: > > - client can be generated in all the grpc supported language: java, go, > c++ examples are provided; > - the interface is full typed. No need for marshaller and clients build in > different language can cooperate on the same cache; > > The second item is my preferred one beacuse it frees the developer from > data marshalling. > > What do you think about? > Sounds interesting? > Can you see any flaw? > > There's also a list of issues for the future [2], basically I would like > to investigate these questions: > How far this architecture can go? > Topology, events, queries... how many of the Infinispan features can be > fit in a grpc architecture? 
> > Thank you > Vittorio > > [1] https://github.com/rigazilla/ispn-grpc > [2] https://github.com/rigazilla/ispn-grpc/issues > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > > > _______________________________________________ > infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/e77b1e0d/attachment-0001.html From belaban at mailbox.org Tue May 29 04:55:35 2018 From: belaban at mailbox.org (Bela Ban) Date: Tue, 29 May 2018 10:55:35 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> Message-ID: FYI, I've also been looking into gRPC, as a tool to provide a (JGroups) version-independent service that all traffic is sent to and received from during a rolling cluster upgrade [1]. The focus of this is *version independence*, ie. have 3.6 and 4.x nodes talk to each other. A non-requirement is performance or memory consumption, as the service is only used during an upgrade (typically a couple of seconds). [1] https://github.com/jgroups-extras/RollingUpgrades/blob/master/common/src/main/proto/relay.proto On 28/05/18 16:47, Adrian Nistor wrote: > Hi Vittorio, > thanks for exploring gRPC. It seems like a very elegant solution for > exposing services. I'll have a look at your PoC soon. > > I feel there are some remarks that need to be made regarding gRPC. gRPC > is just some nice cheesy topping on top of protobuf. 
Google's > implementation of protobuf, to be more precise. > It does not need handwritten marshallers, but the 'No need for > marshaller' does not accurately describe it. Marshallers are needed and > are generated under the cover by the library and so are the data objects > and you are unfortunately forced to use them. That's both the good news > and the bad news:) The whole thing looks very promising and friendly for > many uses cases, especially for demos and PoCs :))). Nobody wants to > write those marshallers. But it starts to become a nuisance if you want > to use your own data objects. > There is also the ugliness and excessive memory footprint of the > generated code, which is the reason Infinispan did not adopt the > protobuf-java library although it did adopt protobuf as an encoding format. > The Protostream library was created as an alternative implementation to > solve the aforementioned problems with the generated code. It solves > this by letting the user provide their own data objects. And for the > marshallers it gives you two options: a) write the marshaller yourself > (hated), b) annotated your data objects and the marshaller gets > generated (loved). Protostream does not currently support service > definitions right now but this is something I started to investigate > recently after Galder asked me if I think it's doable. I think I'll only > find out after I do it:) > > Adrian > > On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >> Hi Infinispan developers, >> >> I'm working on a solution for developers who need to access Infinispan >> services? through different programming languages. >> >> The focus is not on developing a full featured client, but rather >> discover the value and the limits of this approach. >> >> - is it possible to automatically generate useful clients in different >> languages? >> - can that clients interoperate on the same cache with the same data >> types? 
>> >> I came out with a small prototype that I would like to submit to you >> and on which I would like to gather your impressions. >> >> ?You can found the project here [1]: is a gRPC-based client/server >> architecture for Infinispan based on and EmbeddedCache, with very few >> features exposed atm. >> >> Currently the project is nothing more than a poc with the following >> interesting features: >> >> - client can be generated in all the grpc supported language: java, >> go, c++ examples are provided; >> - the interface is full typed. No need for marshaller and clients >> build in different language can cooperate on the same cache; >> >> The second item is my preferred one beacuse it frees the developer >> from data marshalling. >> >> What do you think about? >> Sounds interesting? >> Can you see any flaw? >> >> There's also a list of issues for the future [2], basically I would >> like to investigate these questions: >> How far this architecture can go? >> Topology, events, queries... how many of the Infinispan features can >> be fit in a grpc architecture? 
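[Editorial aside: the "fully typed interface" idea in the quoted message can be made concrete with a short sketch. This is not taken from the ispn-grpc repository; the service and message names are invented for illustration of the approach the PoC describes, a typed get/put service compiled for each supported language.]

```protobuf
// Hypothetical sketch -- names are illustrative, not from the ispn-grpc PoC.
syntax = "proto3";

package cache.sketch;

// A cache service typed directly against an application value type.
service BookCache {
  rpc Put (PutBookRequest) returns (PutBookResponse);
  rpc Get (GetBookRequest) returns (GetBookResponse);
}

message Book {
  string title = 1;
  int32  publication_year = 2;
}

message PutBookRequest  { string isbn = 1; Book book = 2; }
message PutBookResponse { }
message GetBookRequest  { string isbn = 1; }
message GetBookResponse { Book book = 1; }
```

Running protoc with the gRPC plugin over such a file yields both the data classes and the client stubs in each supported language, which is where the "no need for marshaller" impression comes from: the marshalling code exists, but it is generated rather than handwritten.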
>> Thank you >> Vittorio >> >> [1] https://github.com/rigazilla/ispn-grpc >> >> [2] https://github.com/rigazilla/ispn-grpc/issues >> >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban | http://www.jgroups.org From rory.odonnell at oracle.com Tue May 29 05:47:17 2018 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Tue, 29 May 2018 10:47:17 +0100 Subject: [infinispan-dev] JDK 11 Early Access build 15 is available for download. Message-ID: Hi Galder, JDK 11 EA build 15, under both the GPL and Oracle EA licenses, is now available at http://jdk.java.net/11.
* Newly approved Schedule, status & features
  o http://openjdk.java.net/projects/jdk/11/
* Release Notes:
  o http://jdk.java.net/11/release-notes
* Summary of changes
  o http://jdk.java.net/11/changes

Notable changes in JDK 11 EA builds since the last email:

* b15 - JDK-8201627 - Kerberos sequence number issues
* b13 - JDK-8200146 - Removal of appletviewer launcher
  o deprecated in JDK 9 and removed in this release
* b13 - JDK-8201793 - java.lang.ref.Reference does not support cloning

JEPs proposed to target JDK 11 (review ends 2018/05/31 23:00 UTC)
  330: Launch Single-File Source-Code Programs

JEPs targeted to JDK 11, so far
  309: Dynamic Class-File Constants
  318: Epsilon: A No-Op Garbage Collector
  320: Remove the Java EE and CORBA Modules
  321: HTTP Client (Standard)
  323: Local-Variable Syntax for Lambda Parameters
  324: Key Agreement with Curve25519 and Curve448
  327: Unicode 10
  328: Flight Recorder
  329: ChaCha20 and Poly1305 Cryptographic Algorithms

Finally, the initial TLSv1.3 implementation has been released to the Open Sandbox. Please note well: this branch is under very active development and is not final by any means. Also note: by releasing this code, we are not committing to a specific release or timeframe. We will continue development and fixing bugs until the code is ready for inclusion in the JDK. We welcome your feedback, more info [1]

Regards, Rory

[1] http://mail.openjdk.java.net/pipermail/security-dev/2018-May/017139.html

-- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/8cc33674/attachment.html From vrigamon at redhat.com Tue May 29 08:45:20 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Tue, 29 May 2018 14:45:20 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> Message-ID: Thanks Adrian, of course there is marshalling work under the cover, and that is reflected in the generated code (especially the accessor methods generated from the oneof clause). My opinion is that on the client side this could be acceptable, as long as the APIs are well defined and documented: an application developer can build an ad-hoc decorator on top if needed. The alternative is to develop a Protostream equivalent for each supported language, and that doesn't seem really feasible to me. On the server side (Java only) the situation is different: protobuf is optimized for streaming, not for storing, so a Protostream layer is probably needed. On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor wrote: > Hi Vittorio, > thanks for exploring gRPC. It seems like a very elegant solution for > exposing services. I'll have a look at your PoC soon. > > I feel there are some remarks that need to be made regarding gRPC. gRPC is > just some nice cheesy topping on top of protobuf. Google's implementation > of protobuf, to be more precise. > It does not need handwritten marshallers, but the 'No need for marshaller' > does not accurately describe it. Marshallers are needed and are generated > under the cover by the library and so are the data objects and you are > unfortunately forced to use them. That's both the good news and the bad > news:) The whole thing looks very promising and friendly for many uses > cases, especially for demos and PoCs :))). Nobody wants to write those > marshallers.
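[Editorial aside: to make "generated under the cover by the library" concrete, here is a minimal, hand-rolled sketch of what a protobuf marshaller emits on the wire for a message with a single string field. This is plain JDK code written for illustration only; real generated marshallers and Protostream do considerably more (nested messages, repeated fields, unknown-field handling).]

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hand-rolled sketch of the protobuf wire format for { string name = 1; }.
// Field 1, wire type 2 (length-delimited) gives the tag byte 0x0A.
public class WireFormatSketch {

    // Encode an unsigned int as a protobuf varint: 7 bits per byte,
    // high bit set on every byte except the last.
    static void writeVarint(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // Encode the message as protobuf would: tag, length varint, UTF-8 bytes.
    static byte[] encodeNameMessage(String name) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int tag = (1 << 3) | 2;   // field number 1, wire type 2 -> 0x0A
        out.write(tag);
        byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
        writeVarint(out, utf8.length);
        out.write(utf8, 0, utf8.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] encoded = encodeNameMessage("grpc");
        // expected bytes: 0x0A (tag), 0x04 (length), 'g', 'r', 'p', 'c'
        if (encoded.length != 6 || encoded[0] != 0x0A || encoded[1] != 4)
            throw new AssertionError("unexpected encoding");
        System.out.println("encoded " + encoded.length + " bytes");
    }
}
```

This is exactly the kind of code nobody wants to write by hand for every field of every message, which is why both protobuf-java codegen and Protostream exist.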
But it starts to become a nuisance if you want to use your own > data objects. > There is also the ugliness and excessive memory footprint of the generated > code, which is the reason Infinispan did not adopt the protobuf-java > library although it did adopt protobuf as an encoding format. > The Protostream library was created as an alternative implementation to > solve the aforementioned problems with the generated code. It solves this > by letting the user provide their own data objects. And for the marshallers > it gives you two options: a) write the marshaller yourself (hated), b) > annotated your data objects and the marshaller gets generated (loved). > Protostream does not currently support service definitions right now but > this is something I started to investigate recently after Galder asked me > if I think it's doable. I think I'll only find out after I do it:) > > Adrian > > > On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: > > Hi Infinispan developers, > > I'm working on a solution for developers who need to access Infinispan > services through different programming languages. > > The focus is not on developing a full featured client, but rather discover > the value and the limits of this approach. > > - is it possible to automatically generate useful clients in different > languages? > - can that clients interoperate on the same cache with the same data types? > > I came out with a small prototype that I would like to submit to you and > on which I would like to gather your impressions. > > You can found the project here [1]: is a gRPC-based client/server > architecture for Infinispan based on and EmbeddedCache, with very few > features exposed atm. > > Currently the project is nothing more than a poc with the following > interesting features: > > - client can be generated in all the grpc supported language: java, go, > c++ examples are provided; > - the interface is full typed. 
No need for marshaller and clients build in > different language can cooperate on the same cache; > > The second item is my preferred one beacuse it frees the developer from > data marshalling. > > What do you think about? > Sounds interesting? > Can you see any flaw? > > There's also a list of issues for the future [2], basically I would like > to investigate these questions: > How far this architecture can go? > Topology, events, queries... how many of the Infinispan features can be > fit in a grpc architecture? > > Thank you > Vittorio > > [1] https://github.com/rigazilla/ispn-grpc > [2] https://github.com/rigazilla/ispn-grpc/issues > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > > > _______________________________________________ > infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/315884f0/attachment-0001.html From sanne at infinispan.org Tue May 29 09:51:13 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 29 May 2018 14:51:13 +0100 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> Message-ID: On 29 May 2018 at 13:45, Vittorio Rigamonti wrote: > Thanks Adrian, > > of course there's a marshalling work under the cover and that is reflected > into the generated code (specially the accessor methods generated from the > oneof clause). > > My opinion is that on the client side this could be accepted, as long as > the API are well defined and documented: application developer can build an > adhoc decorator on the top if needed. 
The alternative to this is to develop > a protostream equivalent for each supported language and it doesn't seem > really feasible to me. > ?This might indeed be reasonable for some developers, some languages. Just please make sure it's not the only option, as many other developers will not expect to need a compiler at hand in various stages of the application lifecycle. For example when deploying a JPA model into an appserver, or just booting Hibernate in JavaSE as well, there is a strong expectation that we'll be able - at runtime - to inspect the listed Java POJOs via reflection and automatically generate whatever Infinispan will need. Perhaps a key differentiator is between invoking Infinispan APIs (RPC) vs defining the object models and related CODECs for keys, values, streams and query results? It might get a bit more fuzzy to differentiate them for custom functions but I guess we can draw a line somewhere. Thanks, Sanne > > On the server side (java only) the situation is different: protobuf is > optimized for streaming not for storing so probably a Protostream layer is > needed. > > On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor wrote: > >> Hi Vittorio, >> thanks for exploring gRPC. It seems like a very elegant solution for >> exposing services. I'll have a look at your PoC soon. >> >> I feel there are some remarks that need to be made regarding gRPC. gRPC >> is just some nice cheesy topping on top of protobuf. Google's >> implementation of protobuf, to be more precise. >> It does not need handwritten marshallers, but the 'No need for >> marshaller' does not accurately describe it. Marshallers are needed and are >> generated under the cover by the library and so are the data objects and >> you are unfortunately forced to use them. That's both the good news and the >> bad news:) The whole thing looks very promising and friendly for many uses >> cases, especially for demos and PoCs :))). Nobody wants to write those >> marshallers. 
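[Editorial aside: Sanne's expectation of inspecting POJOs via reflection at runtime and generating whatever Infinispan needs can be sketched in a few lines. This is a toy illustration, not Protostream's actual API; field numbering here naively follows reflection order, whereas a real implementation must assign stable field numbers.]

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Toy sketch: derive a protobuf message definition from a POJO at runtime.
public class ReflectiveSchemaSketch {

    static final Map<Class<?>, String> PROTO_TYPES = new HashMap<>();
    static {
        PROTO_TYPES.put(int.class, "int32");
        PROTO_TYPES.put(long.class, "int64");
        PROTO_TYPES.put(boolean.class, "bool");
        PROTO_TYPES.put(double.class, "double");
        PROTO_TYPES.put(String.class, "string");
    }

    static String schemaFor(Class<?> pojo) {
        StringBuilder sb = new StringBuilder("message ")
                .append(pojo.getSimpleName()).append(" {\n");
        int fieldNumber = 1;
        for (Field f : pojo.getDeclaredFields()) {
            String protoType = PROTO_TYPES.get(f.getType());
            if (protoType == null)
                throw new IllegalArgumentException("unsupported type: " + f.getType());
            sb.append("  ").append(protoType).append(' ')
              .append(f.getName()).append(" = ").append(fieldNumber++).append(";\n");
        }
        return sb.append("}\n").toString();
    }

    // Example user POJO, no annotations, no generated code.
    public static class Book {
        String title;
        int publicationYear;
    }

    public static void main(String[] args) {
        String schema = schemaFor(Book.class);
        if (!schema.contains("string title") || !schema.contains("int32 publicationYear"))
            throw new AssertionError(schema);
        System.out.print(schema);
    }
}
```

The point is that the schema can be produced at runtime from plain POJOs, with no compiler at hand, which is the deployment model Hibernate users expect.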
But it starts to become a nuisance if you want to use your own >> data objects. >> There is also the ugliness and excessive memory footprint of the >> generated code, which is the reason Infinispan did not adopt the >> protobuf-java library although it did adopt protobuf as an encoding format. >> The Protostream library was created as an alternative implementation to >> solve the aforementioned problems with the generated code. It solves this >> by letting the user provide their own data objects. And for the marshallers >> it gives you two options: a) write the marshaller yourself (hated), b) >> annotated your data objects and the marshaller gets generated (loved). >> Protostream does not currently support service definitions right now but >> this is something I started to investigate recently after Galder asked me >> if I think it's doable. I think I'll only find out after I do it:) >> >> Adrian >> >> >> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >> >> Hi Infinispan developers, >> >> I'm working on a solution for developers who need to access Infinispan >> services through different programming languages. >> >> The focus is not on developing a full featured client, but rather >> discover the value and the limits of this approach. >> >> - is it possible to automatically generate useful clients in different >> languages? >> - can that clients interoperate on the same cache with the same data >> types? >> >> I came out with a small prototype that I would like to submit to you and >> on which I would like to gather your impressions. >> >> You can found the project here [1]: is a gRPC-based client/server >> architecture for Infinispan based on and EmbeddedCache, with very few >> features exposed atm. >> >> Currently the project is nothing more than a poc with the following >> interesting features: >> >> - client can be generated in all the grpc supported language: java, go, >> c++ examples are provided; >> - the interface is full typed. 
No need for marshaller and clients build >> in different language can cooperate on the same cache; >> >> The second item is my preferred one beacuse it frees the developer from >> data marshalling. >> >> What do you think about? >> Sounds interesting? >> Can you see any flaw? >> >> There's also a list of issues for the future [2], basically I would like >> to investigate these questions: >> How far this architecture can go? >> Topology, events, queries... how many of the Infinispan features can be >> fit in a grpc architecture? >> >> Thank you >> Vittorio >> >> [1] https://github.com/rigazilla/ispn-grpc >> [2] https://github.com/rigazilla/ispn-grpc/issues >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> _______________________________________________ >> infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/84efbe35/attachment.html From vrigamon at redhat.com Tue May 29 10:35:57 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Tue, 29 May 2018 16:35:57 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> Message-ID: Thanks Galder, comments inline On Tue, May 29, 2018 at 10:35 AM, Galder Zamarreno wrote: > Hi all, > > @Vittorio, thanks a lot for working on this! 
> > Let me explain some of the background behind this effort so that we're all > on the same page: > > The biggest problem I see in our client/server architecture is the ability > to quickly deliver features/APIs across multiple language clients. Both > Vittorio and I have seen how long it takes to implement all the different > features available in Java client and port them to Node.js, C/C++/C#...etc. > This effort lead by Vittorio is trying to improve on that by having some of > that work done for us. Granted, not all of it will be done, but it should > give us some good foundations on which to build. > > One thing I mentioned to Vittorio is that he should investigate what the > performance impact of using gRPC is. This is crucial to decide whether to > take this forward or not. This should really have been done by now so that > other devs are aware of the cost in terms of latency and memory > consumption. As you can see from the first comment, there are already > concerns with its memory consumption. So, this needs to be done ASAP so > that we're aware of the consequences right away. > we need to define some scenarios to be sure to collect meaningful data. Well probably with memory we can do some quick estimation. > Also, when I looked at gRPC, I was considering having the base layer use > only bytes, and we'd build the marshallers/encoders...etc we need on top. > Maybe both approaches can be compared from the POV of performance. > > If gRPC performance is not up to scratch, we have the contacts to see if > things can be improved. > > Once again, as I mentioned to Vittorio separately, if we can't rely on > gRPC (or similar tool), it'd be nice to have just a C client (or a more > typesafe client that compiles into C, e.g. Rust) that uses protobuf > serialized messages and get any other language to be a wrapper of that. > This is possible with Node.js and Haskell for example. 
With Java this is > not currently an option since JNI is slow and cumbersome but maybe with > Project Panama [4] this won't be problem in the future. So maybe a Java (w/ > Netty) and C clients and the rest interfacing to them would be the way if > gRPC does not work out. > i did some experiments based on SWIG here: https://github.com/rigazilla/hotswig the c/wrapper architecture works quite well with the simple get/put use case, things become harder with events, queries... > > Cheers > > On Mon, May 28, 2018 at 4:50 PM Adrian Nistor wrote: > >> Hi Vittorio, >> thanks for exploring gRPC. It seems like a very elegant solution for >> exposing services. I'll have a look at your PoC soon. >> >> I feel there are some remarks that need to be made regarding gRPC. gRPC >> is just some nice cheesy topping on top of protobuf. Google's >> implementation of protobuf, to be more precise. >> It does not need handwritten marshallers, but the 'No need for >> marshaller' does not accurately describe it. Marshallers are needed and are >> generated under the cover by the library and so are the data objects and >> you are unfortunately forced to use them. That's both the good news and the >> bad news:) The whole thing looks very promising and friendly for many uses >> cases, especially for demos and PoCs :))). Nobody wants to write those >> marshallers. But it starts to become a nuisance if you want to use your own >> data objects. >> There is also the ugliness and excessive memory footprint of the >> generated code, which is the reason Infinispan did not adopt the >> protobuf-java library although it did adopt protobuf as an encoding format. >> The Protostream library was created as an alternative implementation to >> solve the aforementioned problems with the generated code. It solves this >> by letting the user provide their own data objects. 
And for the marshallers >> it gives you two options: a) write the marshaller yourself (hated), b) >> annotated your data objects and the marshaller gets generated (loved). >> Protostream does not currently support service definitions right now but >> this is something I started to investigate recently after Galder asked me >> if I think it's doable. I think I'll only find out after I do it:) >> >> >> Adrian >> >> >> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >> >> Hi Infinispan developers, >> >> I'm working on a solution for developers who need to access Infinispan >> services through different programming languages. >> >> The focus is not on developing a full featured client, but rather >> discover the value and the limits of this approach. >> >> - is it possible to automatically generate useful clients in different >> languages? >> - can that clients interoperate on the same cache with the same data >> types? >> >> I came out with a small prototype that I would like to submit to you and >> on which I would like to gather your impressions. >> >> You can found the project here [1]: is a gRPC-based client/server >> architecture for Infinispan based on and EmbeddedCache, with very few >> features exposed atm. >> >> Currently the project is nothing more than a poc with the following >> interesting features: >> >> - client can be generated in all the grpc supported language: java, go, >> c++ examples are provided; >> - the interface is full typed. No need for marshaller and clients build >> in different language can cooperate on the same cache; >> >> The second item is my preferred one beacuse it frees the developer from >> data marshalling. >> >> What do you think about? >> Sounds interesting? >> Can you see any flaw? >> >> There's also a list of issues for the future [2], basically I would like >> to investigate these questions: >> How far this architecture can go? >> Topology, events, queries... 
how many of the Infinispan features can be >> fit in a grpc architecture? >> >> Thank you >> Vittorio >> >> [1] https://github.com/rigazilla/ispn-grpc >> [2] https://github.com/rigazilla/ispn-grpc/issues >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> _______________________________________________ >> infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/76d8f3ef/attachment-0001.html From emmanuel at hibernate.org Tue May 29 12:20:48 2018 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 29 May 2018 18:20:48 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> Message-ID: <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> Right. Here we are talking about a gRPC representation of the client server interactions. Not the data schema stored in ISPN. In that model, the API is compiled by us and handed over as a package. > On 29 May 2018, at 15:51, Sanne Grinovero wrote: > > > >> On 29 May 2018 at 13:45, Vittorio Rigamonti wrote: >> Thanks Adrian, >> >> of course there's a marshalling work under the cover and that is reflected into the generated code (specially the accessor methods generated from the oneof clause). 
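[Editorial aside: the oneof clause referred to above, which drives those generated accessor methods, looks roughly like this. Message and field names are hypothetical; the PoC's actual definitions may differ.]

```protobuf
// Hypothetical sketch of a oneof-based key envelope.
syntax = "proto3";

message KeyMsg {
  // Every user application key type must be listed here up front,
  // which couples the service definition to the user data model.
  oneof key {
    string string_key = 1;
    int64  long_key   = 2;
    bytes  raw_key    = 3;
  }
}
```

For such a oneof, protobuf-java generates a `KeyCase` enum plus `getKeyCase()` and per-member accessors such as `getStringKey()`; that generated surface is the marshalling work "reflected into the generated code" being discussed here.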
>> >> My opinion is that on the client side this could be accepted, as long as the API are well defined and documented: application developer can build an adhoc decorator on the top if needed. The alternative to this is to develop a protostream equivalent for each supported language and it doesn't seem really feasible to me. > > ?This might indeed be reasonable for some developers, some languages. > > Just please make sure it's not the only option, as many other developers will not expect to need a compiler at hand in various stages of the application lifecycle. > > For example when deploying a JPA model into an appserver, or just booting Hibernate in JavaSE as well, there is a strong expectation that we'll be able - at runtime - to inspect the listed Java POJOs via reflection and automatically generate whatever Infinispan will need. > > Perhaps a key differentiator is between invoking Infinispan APIs (RPC) vs defining the object models and related CODECs for keys, values, streams and query results? It might get a bit more fuzzy to differentiate them for custom functions but I guess we can draw a line somewhere. > > Thanks, > Sanne > > >> >> On the server side (java only) the situation is different: protobuf is optimized for streaming not for storing so probably a Protostream layer is needed. >> >> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor wrote: >>> Hi Vittorio, >>> thanks for exploring gRPC. It seems like a very elegant solution for exposing services. I'll have a look at your PoC soon. >>> >>> I feel there are some remarks that need to be made regarding gRPC. gRPC is just some nice cheesy topping on top of protobuf. Google's implementation of protobuf, to be more precise. >>> It does not need handwritten marshallers, but the 'No need for marshaller' does not accurately describe it. Marshallers are needed and are generated under the cover by the library and so are the data objects and you are unfortunately forced to use them. 
That's both the good news and the bad news:) The whole thing looks very promising and friendly for many uses cases, especially for demos and PoCs :))). Nobody wants to write those marshallers. But it starts to become a nuisance if you want to use your own data objects. >>> There is also the ugliness and excessive memory footprint of the generated code, which is the reason Infinispan did not adopt the protobuf-java library although it did adopt protobuf as an encoding format. >>> The Protostream library was created as an alternative implementation to solve the aforementioned problems with the generated code. It solves this by letting the user provide their own data objects. And for the marshallers it gives you two options: a) write the marshaller yourself (hated), b) annotated your data objects and the marshaller gets generated (loved). Protostream does not currently support service definitions right now but this is something I started to investigate recently after Galder asked me if I think it's doable. I think I'll only find out after I do it:) >>> >>> Adrian >>> >>> >>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>> Hi Infinispan developers, >>>> >>>> I'm working on a solution for developers who need to access Infinispan services through different programming languages. >>>> >>>> The focus is not on developing a full featured client, but rather discover the value and the limits of this approach. >>>> >>>> - is it possible to automatically generate useful clients in different languages? >>>> - can that clients interoperate on the same cache with the same data types? >>>> >>>> I came out with a small prototype that I would like to submit to you and on which I would like to gather your impressions. >>>> >>>> You can found the project here [1]: is a gRPC-based client/server architecture for Infinispan based on and EmbeddedCache, with very few features exposed atm. 
>>>> >>>> Currently the project is nothing more than a poc with the following interesting features: >>>> >>>> - client can be generated in all the grpc supported language: java, go, c++ examples are provided; >>>> - the interface is full typed. No need for marshaller and clients build in different language can cooperate on the same cache; >>>> >>>> The second item is my preferred one beacuse it frees the developer from data marshalling. >>>> >>>> What do you think about? >>>> Sounds interesting? >>>> Can you see any flaw? >>>> >>>> There's also a list of issues for the future [2], basically I would like to investigate these questions: >>>> How far this architecture can go? >>>> Topology, events, queries... how many of the Infinispan features can be fit in a grpc architecture? >>>> >>>> Thank you >>>> Vittorio >>>> >>>> [1] https://github.com/rigazilla/ispn-grpc >>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>> >>>> -- >>>> VITTORIO RIGAMONTI >>>> SENIOR SOFTWARE ENGINEER >>>> Red Hat >>>> >>>> Milan, Italy >>>> vrigamon at redhat.com >>>> >>>> irc: rigazilla >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> >> >> -- >> VITTORIO RIGAMONTI >> SENIOR SOFTWARE ENGINEER >> Red Hat >> >> Milan, Italy >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/9c68a9b5/attachment.html From anistor at redhat.com Tue May 29 14:49:40 2018 From: anistor at redhat.com (Adrian Nistor) Date: Tue, 29 May 2018 21:49:40 +0300 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> Message-ID: <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> Vittorio, a few remarks regarding your statement "...The alternative to this is to develop a protostream equivalent for each supported language and it doesn't seem really feasible to me." No way! That's a big misunderstanding. We do not need to re-implement the protostream library in C/C++/C# or any new supported language. Protostream is just for Java and it is compatible with Google's protobuf lib we already use in the other clients. We can continue using Google's protobuf lib for these clients, with or without gRPC. Protostream does not handle protobuf services as gRPC does, but we can add support for that with little effort. The real problem here is if we want to replace our hot rod invocation protocol with gRPC to save on the effort of implementing and maintaining hot rod in all those clients. I wonder why the obvious question is being avoided in this thread. Adrian On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: > Thanks Adrian, > > of course there's a marshalling work under the cover and that is > reflected into the generated code (specially the accessor methods > generated from the oneof clause). > > My opinion is that on the client side this could be accepted, as long > as the API are well defined and documented: application developer can > build an adhoc decorator on the top if needed. The alternative to this > is to develop a protostream equivalent for each supported language and > it doesn't seem really feasible to me. 
> > On the server side (java only) the situation is different: protobuf is > optimized for streaming not for storing so probably a Protostream > layer is needed. > > On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor > wrote: > > Hi Vittorio, > thanks for exploring gRPC. It seems like a very elegant solution > for exposing services. I'll have a look at your PoC soon. > > I feel there are some remarks that need to be made regarding gRPC. > gRPC is just some nice cheesy topping on top of protobuf. Google's > implementation of protobuf, to be more precise. > It does not need handwritten marshallers, but the 'No need for > marshaller' does not accurately describe it. Marshallers are > needed and are generated under the cover by the library and so are > the data objects and you are unfortunately forced to use them. > That's both the good news and the bad news:) The whole thing looks > very promising and friendly for many uses cases, especially for > demos and PoCs :))). Nobody wants to write those marshallers. But > it starts to become a nuisance if you want to use your own data > objects. > There is also the ugliness and excessive memory footprint of the > generated code, which is the reason Infinispan did not adopt the > protobuf-java library although it did adopt protobuf as an > encoding format. > The Protostream library was created as an alternative > implementation to solve the aforementioned problems with the > generated code. It solves this by letting the user provide their > own data objects. And for the marshallers it gives you two > options: a) write the marshaller yourself (hated), b) annotated > your data objects and the marshaller gets generated (loved). > Protostream does not currently support service definitions right > now but this is something I started to investigate recently after > Galder asked me if I think it's doable. 
I think I'll only find out > after I do it:) > > Adrian > > > On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >> Hi Infinispan developers, >> >> I'm working on a solution for developers who need to access >> Infinispan services? through different programming languages. >> >> The focus is not on developing a full featured client, but rather >> discover the value and the limits of this approach. >> >> - is it possible to automatically generate useful clients in >> different languages? >> - can that clients interoperate on the same cache with the same >> data types? >> >> I came out with a small prototype that I would like to submit to >> you and on which I would like to gather your impressions. >> >> ?You can found the project here [1]: is a gRPC-based >> client/server architecture for Infinispan based on and >> EmbeddedCache, with very few features exposed atm. >> >> Currently the project is nothing more than a poc with the >> following interesting features: >> >> - client can be generated in all the grpc supported language: >> java, go, c++ examples are provided; >> - the interface is full typed. No need for marshaller and clients >> build in different language can cooperate on the same cache; >> >> The second item is my preferred one beacuse it frees the >> developer from data marshalling. >> >> What do you think about? >> Sounds interesting? >> Can you see any flaw? >> >> There's also a list of issues for the future [2], basically I >> would like to investigate these questions: >> How far this architecture can go? >> Topology, events, queries... how many of the Infinispan features >> can be fit in a grpc architecture? 
>> >> Thank you >> Vittorio >> >> [1] https://github.com/rigazilla/ispn-grpc >> >> [2] https://github.com/rigazilla/ispn-grpc/issues >> >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > > > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/fc6d5def/attachment-0001.html From anistor at redhat.com Tue May 29 14:59:13 2018 From: anistor at redhat.com (Adrian Nistor) Date: Tue, 29 May 2018 21:59:13 +0300 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> Message-ID: <553d5779-b32c-7fe3-96e1-85e07ec56ad4@redhat.com> So you assume the two are separate, Emmanuel. So do I. But in the current PoC the user data model is directly referenced by the service model interface (KeyMsg and ValueMsg are oneofs listing all possible user application types???). I was assuming this hard dependency was there just to make things simple for the scope of the PoC. But let's not make this too simple because it will stop being useful. My expectation is to see a generic yet fully typed 'cache service' interface that does not depend on the key and value types that come from userland, using maybe 'google.protobuf.Any' or our own 'WrappedMessage' type instead. 
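[Editor's sketch] A generic, fully typed cache service of the kind described above might look roughly like the following protobuf fragment. This is a hypothetical illustration, not taken from the PoC or from Infinispan; all message, field, and rpc names are invented:

```protobuf
syntax = "proto3";

import "google/protobuf/any.proto";

// Hypothetical generic cache service: the service IDL is fixed, while key
// and value payloads travel as google.protobuf.Any (or an equivalent
// WrappedMessage), so userland types never appear in the service schema.
message CacheKey {
  google.protobuf.Any key = 1;
}

message CacheValue {
  google.protobuf.Any value = 1;
}

message PutRequest {
  string cache_name = 1;
  CacheKey key = 2;
  CacheValue value = 3;
}

service GenericCache {
  rpc Get(CacheKey) returns (CacheValue);
  rpc Put(PutRequest) returns (CacheValue);
}
```

With this shape the user model lives only inside the Any payloads, so adding a new application type would not require regenerating the service stubs — which is the separation of service model and data model being argued for here.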
I'm not sure what to believe now because discussing my hopes and assumptions on the gRPC topic on zulip I think I understood the opposite is desired. Vittorio, please comment on this. I'm still hoping we want to keep the service interface generic and separated from the user model. And if we do it, would you expect to be able to marshall the service call using the gRPC lib and at the same time be able to marshall the user model using whatever other library? That would be nice, but it seems to be a no-no with gRPC, or I did not search deep enough. I only looked at the java implementation anyway. It seems to force you to go with protoc-generated code and protobuf-java.jar all the way, for marshalling both the service and its arguments. And this goes infinitely deeper. If a service argument of type A has a nested field of type B and the marshaller for A is generated with protobuf-java then so is B. Using oneofs or type 'Any' still does not save you from this. The only escape is to pretend the user payload is of type 'bytes'. At that point you are left to do your marshalling to and from bytes yourself. And you are also left with the question of what the heck the contents of that byte array are the next time you unmarshall it, which is currently answered by WrappedMessage. So the more I look at gRPC, the more it seems elegant for most purposes but lacking for ours. And again, as with protocol buffers, the wire protocol and the IDL are really nice. It is the implementation that is lacking, IMHO. I think to be really on the same page we should first make a clear statement of what we intend to achieve here in a bit more detail. Also, since this is not a clean-slate effort, we should think right from the start about the expected interactions with the existing code base, i.e. what we are willing to sacrifice. Somebody mention hot rod please! Adrian On 05/29/2018 07:20 PM, Emmanuel Bernard wrote: > Right. Here we are talking about a gRPC representation of the client > server interactions. 
Not the data schema stored in ISPN. In that > model, the API is compiled by us and handed over as a package. > > On 29 May 2018, at 15:51, Sanne Grinovero > wrote: > >> >> >> On 29 May 2018 at 13:45, Vittorio Rigamonti > > wrote: >> >> Thanks Adrian, >> >> of course there's a marshalling work under the cover and that is >> reflected into the generated code (specially the accessor methods >> generated from the oneof clause). >> >> My opinion is that on the client side this could be accepted, as >> long as the API are well defined and documented: application >> developer can build an adhoc decorator on the top if needed. The >> alternative to this is to develop a protostream equivalent for >> each supported language and it doesn't seem really feasible to me. >> >> >> ?This might indeed be reasonable for some developers, some languages. >> >> Just please make sure it's not the only option, as many other >> developers will not expect to need a compiler at hand in various >> stages of the application lifecycle. >> >> For example when deploying a JPA model into an appserver, or just >> booting Hibernate in JavaSE as well, there is a strong expectation >> that we'll be able - at runtime - to inspect the listed Java POJOs >> via reflection and automatically generate whatever Infinispan will need. >> >> Perhaps a key differentiator is between invoking Infinispan APIs >> (RPC) vs defining the object models and related CODECs for keys, >> values, streams and query results? It might get a bit more fuzzy to >> differentiate them for custom functions but I guess we can draw a >> line somewhere. >> >> Thanks, >> Sanne >> >> >> On the server side (java only) the situation is different: >> protobuf is optimized for streaming not for storing so probably a >> Protostream layer is needed. >> >> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >> > wrote: >> >> Hi Vittorio, >> thanks for exploring gRPC. It seems like a very elegant >> solution for exposing services. 
I'll have a look at your PoC >> soon. >> >> I feel there are some remarks that need to be made regarding >> gRPC. gRPC is just some nice cheesy topping on top of >> protobuf. Google's implementation of protobuf, to be more >> precise. >> It does not need handwritten marshallers, but the 'No need >> for marshaller' does not accurately describe it. Marshallers >> are needed and are generated under the cover by the library >> and so are the data objects and you are unfortunately forced >> to use them. That's both the good news and the bad news:) The >> whole thing looks very promising and friendly for many uses >> cases, especially for demos and PoCs :))). Nobody wants to >> write those marshallers. But it starts to become a nuisance >> if you want to use your own data objects. >> There is also the ugliness and excessive memory footprint of >> the generated code, which is the reason Infinispan did not >> adopt the protobuf-java library although it did adopt >> protobuf as an encoding format. >> The Protostream library was created as an alternative >> implementation to solve the aforementioned problems with the >> generated code. It solves this by letting the user provide >> their own data objects. And for the marshallers it gives you >> two options: a) write the marshaller yourself (hated), b) >> annotated your data objects and the marshaller gets generated >> (loved). Protostream does not currently support service >> definitions right now but this is something I started to >> investigate recently after Galder asked me if I think it's >> doable. I think I'll only find out after I do it:) >> >> Adrian >> >> >> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>> Hi Infinispan developers, >>> >>> I'm working on a solution for developers who need to access >>> Infinispan services? through different programming languages. >>> >>> The focus is not on developing a full featured client, but >>> rather discover the value and the limits of this approach. 
>>> >>> - is it possible to automatically generate useful clients in >>> different languages? >>> - can that clients interoperate on the same cache with the >>> same data types? >>> >>> I came out with a small prototype that I would like to >>> submit to you and on which I would like to gather your >>> impressions. >>> >>> ?You can found the project here [1]: is a gRPC-based >>> client/server architecture for Infinispan based on and >>> EmbeddedCache, with very few features exposed atm. >>> >>> Currently the project is nothing more than a poc with the >>> following interesting features: >>> >>> - client can be generated in all the grpc supported >>> language: java, go, c++ examples are provided; >>> - the interface is full typed. No need for marshaller and >>> clients build in different language can cooperate on the >>> same cache; >>> >>> The second item is my preferred one beacuse it frees the >>> developer from data marshalling. >>> >>> What do you think about? >>> Sounds interesting? >>> Can you see any flaw? >>> >>> There's also a list of issues for the future [2], basically >>> I would like to investigate these questions: >>> How far this architecture can go? >>> Topology, events, queries... how many of the Infinispan >>> features can be fit in a grpc architecture? 
>>> >>> Thank you >>> Vittorio >>> >>> [1] https://github.com/rigazilla/ispn-grpc >>> >>> [2] https://github.com/rigazilla/ispn-grpc/issues >>> >>> >>> -- >>> >>> Vittorio Rigamonti >>> >>> Senior Software Engineer >>> >>> Red Hat >>> >>> >>> >>> Milan, Italy >>> >>> vrigamon at redhat.com >>> >>> irc: rigazilla >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> >> >> >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180529/8ffc1f58/attachment-0001.html From vrigamon at redhat.com Wed May 30 04:02:23 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Wed, 30 May 2018 10:02:23 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> Message-ID: Thanks Emmanuel, actually I started this thread just to describe what I did, I probably forgot an "An" at the beginning of the subject :) Vittorio On Tue, May 29, 2018 at 6:20 PM, Emmanuel Bernard wrote: > Right. Here we are talking about a gRPC representation of the client > server interactions. Not the data schema stored in ISPN. In that model, the > API is compiled by us and handed over as a package. > > On 29 May 2018, at 15:51, Sanne Grinovero wrote: > > > > On 29 May 2018 at 13:45, Vittorio Rigamonti wrote: > >> Thanks Adrian, >> >> of course there's a marshalling work under the cover and that is >> reflected into the generated code (specially the accessor methods generated >> from the oneof clause). >> >> My opinion is that on the client side this could be accepted, as long as >> the API are well defined and documented: application developer can build an >> adhoc decorator on the top if needed. The alternative to this is to develop >> a protostream equivalent for each supported language and it doesn't seem >> really feasible to me. >> > > ?This might indeed be reasonable for some developers, some languages. > > Just please make sure it's not the only option, as many other developers > will not expect to need a compiler at hand in various stages of the > application lifecycle. 
> > For example when deploying a JPA model into an appserver, or just booting > Hibernate in JavaSE as well, there is a strong expectation that we'll be > able - at runtime - to inspect the listed Java POJOs via reflection and > automatically generate whatever Infinispan will need. > > Perhaps a key differentiator is between invoking Infinispan APIs (RPC) vs > defining the object models and related CODECs for keys, values, streams and > query results? It might get a bit more fuzzy to differentiate them for > custom functions but I guess we can draw a line somewhere. > > Thanks, > Sanne > > > >> >> On the server side (java only) the situation is different: protobuf is >> optimized for streaming not for storing so probably a Protostream layer is >> needed. >> >> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >> wrote: >> >>> Hi Vittorio, >>> thanks for exploring gRPC. It seems like a very elegant solution for >>> exposing services. I'll have a look at your PoC soon. >>> >>> I feel there are some remarks that need to be made regarding gRPC. gRPC >>> is just some nice cheesy topping on top of protobuf. Google's >>> implementation of protobuf, to be more precise. >>> It does not need handwritten marshallers, but the 'No need for >>> marshaller' does not accurately describe it. Marshallers are needed and are >>> generated under the cover by the library and so are the data objects and >>> you are unfortunately forced to use them. That's both the good news and the >>> bad news:) The whole thing looks very promising and friendly for many uses >>> cases, especially for demos and PoCs :))). Nobody wants to write those >>> marshallers. But it starts to become a nuisance if you want to use your own >>> data objects. >>> There is also the ugliness and excessive memory footprint of the >>> generated code, which is the reason Infinispan did not adopt the >>> protobuf-java library although it did adopt protobuf as an encoding format. 
>>> The Protostream library was created as an alternative implementation to >>> solve the aforementioned problems with the generated code. It solves this >>> by letting the user provide their own data objects. And for the marshallers >>> it gives you two options: a) write the marshaller yourself (hated), b) >>> annotated your data objects and the marshaller gets generated (loved). >>> Protostream does not currently support service definitions right now but >>> this is something I started to investigate recently after Galder asked me >>> if I think it's doable. I think I'll only find out after I do it:) >>> >>> Adrian >>> >>> >>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>> >>> Hi Infinispan developers, >>> >>> I'm working on a solution for developers who need to access Infinispan >>> services through different programming languages. >>> >>> The focus is not on developing a full featured client, but rather >>> discover the value and the limits of this approach. >>> >>> - is it possible to automatically generate useful clients in different >>> languages? >>> - can that clients interoperate on the same cache with the same data >>> types? >>> >>> I came out with a small prototype that I would like to submit to you and >>> on which I would like to gather your impressions. >>> >>> You can found the project here [1]: is a gRPC-based client/server >>> architecture for Infinispan based on and EmbeddedCache, with very few >>> features exposed atm. >>> >>> Currently the project is nothing more than a poc with the following >>> interesting features: >>> >>> - client can be generated in all the grpc supported language: java, go, >>> c++ examples are provided; >>> - the interface is full typed. No need for marshaller and clients build >>> in different language can cooperate on the same cache; >>> >>> The second item is my preferred one beacuse it frees the developer from >>> data marshalling. >>> >>> What do you think about? >>> Sounds interesting? 
>>> Can you see any flaw? >>> >>> There's also a list of issues for the future [2], basically I would like >>> to investigate these questions: >>> How far this architecture can go? >>> Topology, events, queries... how many of the Infinispan features can be >>> fit in a grpc architecture? >>> >>> Thank you >>> Vittorio >>> >>> [1] https://github.com/rigazilla/ispn-grpc >>> [2] https://github.com/rigazilla/ispn-grpc/issues >>> >>> -- >>> >>> Vittorio Rigamonti >>> >>> Senior Software Engineer >>> >>> Red Hat >>> >>> >>> >>> Milan, Italy >>> >>> vrigamon at redhat.com >>> >>> irc: rigazilla >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >> >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/ba5f5eb5/attachment.html From vrigamon at redhat.com Wed May 30 04:04:38 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Wed, 30 May 2018 10:04:38 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> Message-ID: On Tue, May 29, 2018 at 8:49 PM, Adrian Nistor wrote: > Vittorio, a few remarks regarding your statement "...The alternative to > this is to develop a protostream equivalent for each supported language and > it doesn't seem really feasible to me." > > No way! That's a big misunderstanding. We do not need to re-implement the > protostream library in C/C++/C# or any new supported language. > Protostream is just for Java and it is compatible with Google's protobuf > lib we already use in the other clients. We can continue using Google's > protobuf lib for these clients, with or without gRPC. > this is a solution that we could explore > Protostream does not handle protobuf services as gRPC does, but we can add > support for that with little effort. > > The real problem here is if we want to replace our hot rod invocation > protocol with gRPC to save on the effort of implementing and maintaining > hot rod in all those clients. I wonder why the obvious question is being > avoided in this thread. > > Adrian > > > On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: > > Thanks Adrian, > > of course there's a marshalling work under the cover and that is reflected > into the generated code (specially the accessor methods generated from the > oneof clause). > > My opinion is that on the client side this could be accepted, as long as > the API are well defined and documented: application developer can build an > adhoc decorator on the top if needed. 
The alternative to this is to develop > a protostream equivalent for each supported language and it doesn't seem > really feasible to me. > > On the server side (java only) the situation is different: protobuf is > optimized for streaming not for storing so probably a Protostream layer is > needed. > > On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor wrote: > >> Hi Vittorio, >> thanks for exploring gRPC. It seems like a very elegant solution for >> exposing services. I'll have a look at your PoC soon. >> >> I feel there are some remarks that need to be made regarding gRPC. gRPC >> is just some nice cheesy topping on top of protobuf. Google's >> implementation of protobuf, to be more precise. >> It does not need handwritten marshallers, but the 'No need for >> marshaller' does not accurately describe it. Marshallers are needed and are >> generated under the cover by the library and so are the data objects and >> you are unfortunately forced to use them. That's both the good news and the >> bad news:) The whole thing looks very promising and friendly for many uses >> cases, especially for demos and PoCs :))). Nobody wants to write those >> marshallers. But it starts to become a nuisance if you want to use your own >> data objects. >> There is also the ugliness and excessive memory footprint of the >> generated code, which is the reason Infinispan did not adopt the >> protobuf-java library although it did adopt protobuf as an encoding format. >> The Protostream library was created as an alternative implementation to >> solve the aforementioned problems with the generated code. It solves this >> by letting the user provide their own data objects. And for the marshallers >> it gives you two options: a) write the marshaller yourself (hated), b) >> annotated your data objects and the marshaller gets generated (loved). 
>> Protostream does not currently support service definitions right now but >> this is something I started to investigate recently after Galder asked me >> if I think it's doable. I think I'll only find out after I do it:) >> >> Adrian >> >> >> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >> >> Hi Infinispan developers, >> >> I'm working on a solution for developers who need to access Infinispan >> services through different programming languages. >> >> The focus is not on developing a full featured client, but rather >> discover the value and the limits of this approach. >> >> - is it possible to automatically generate useful clients in different >> languages? >> - can that clients interoperate on the same cache with the same data >> types? >> >> I came out with a small prototype that I would like to submit to you and >> on which I would like to gather your impressions. >> >> You can found the project here [1]: is a gRPC-based client/server >> architecture for Infinispan based on and EmbeddedCache, with very few >> features exposed atm. >> >> Currently the project is nothing more than a poc with the following >> interesting features: >> >> - client can be generated in all the grpc supported language: java, go, >> c++ examples are provided; >> - the interface is full typed. No need for marshaller and clients build >> in different language can cooperate on the same cache; >> >> The second item is my preferred one beacuse it frees the developer from >> data marshalling. >> >> What do you think about? >> Sounds interesting? >> Can you see any flaw? >> >> There's also a list of issues for the future [2], basically I would like >> to investigate these questions: >> How far this architecture can go? >> Topology, events, queries... how many of the Infinispan features can be >> fit in a grpc architecture? 
>> >> Thank you >> Vittorio >> >> [1] https://github.com/rigazilla/ispn-grpc >> [2] https://github.com/rigazilla/ispn-grpc/issues >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> _______________________________________________ >> infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > > > -- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/ebf31f88/attachment-0001.html From vrigamon at redhat.com Wed May 30 04:34:58 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Wed, 30 May 2018 10:34:58 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <553d5779-b32c-7fe3-96e1-85e07ec56ad4@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> <553d5779-b32c-7fe3-96e1-85e07ec56ad4@redhat.com> Message-ID: On Tue, May 29, 2018 at 8:59 PM, Adrian Nistor wrote: > So you assume the two are separate, Emmanuel. So do I. > > But in the current PoC the user data model is directly referenced by the > service model interface (KeyMsg and ValueMsg are oneofs listing all > possible user application types???). I was assuming this hard dependency > was there just to make things simple for the scope of the PoC. But let's > not make this too simple because it will stop being useful. 
My expectation > is to see a generic yet fully typed 'cache service' interface that does not > depend on the key and value types that come from userland, using maybe > 'google.protobuf.Any' or our own 'WrappedMessage' type instead. I'm not > sure what to believe now because discussing my hopes and assumptions on the > gRPC topic on zulip I think I understood the opposite is desired. > Vittorio, please comment on this. > Yep, that was my design choice. Well, my first goal was to keep the framework language independent: to achieve that I tried to define as much as possible in grpc/protobuf (that's why I didn't use the Any clause). Then I realized that with very little effort I could design a framework that works with user data all the way from the user side to the cache storage, and I wanted to investigate this, mainly for two reasons: - from the user point of view I like the idea that I can find my object types in the cache - the embeddedCache is transparently exposed But this is my 150-line gRPC server prototype, not a proposal for the ISPN object model. However, it's fine to use it as a starting point for a wider discussion. > > I'm still hoping we want to keep the service interface generic and > separated from the user model. And if we do it, would you expect to be able > to marshall the service call using gRPC lib and at the same time be able to > marshall the user model using whatever other library? Would be nice but > that seems to be a no-no with gRPC, or I did not search deep enough. I only > looked at the java implementation anyway. It seems to be forcing you to go > with protoc generated code and protobuf-java.jar all the way, for > marshalling both the service and its arguments. And this goes infinitely > deeper. If a service argument of type A has a nested field of type B and > the marshaller for A is generated with protobuf-java then so is B. Using > oneofs or type 'Any' still do not save you from this. 
The only escape is > to pretend the user payload is of type 'bytes'. At that point you are left > to do your marshaling to and from bytes yourself. And you are also left > with the question, what the heck is the contents of that byte array next > time you unmarshall it, which is currently answered by WrappedMessage. > And indeed the "oneof" clause in my message definition solves the same problem as the WrappedMessage message: what do I have to do with these bytes? Actually, I'm not sure this is a gRPC limitation: if I receive a stream of bytes I also need some info on what I have to reconstruct... but I'm just guessing. > > So the more I look at gRPC it seems elegant for most purposes but lacking > for ours. And again, as with protocol buffers, the wire protocol and the > IDL are really nice. It is the implementation that is lacking, IMHO. > > I think to be really on the same page we should first make a clear > statement of what we intend to achieve here in a bit more detail. Also, > since this is not a clean slate effort, we should think right from the > start what are the expected interactions with existing code base, like what > are we willing to sacrifice. Somebody mention hot rod please! > > Adrian > > > > On 05/29/2018 07:20 PM, Emmanuel Bernard wrote: > > Right. Here we are talking about a gRPC representation of the client > server interactions. Not the data schema stored in ISPN. In that model, the > API is compiled by us and handed over as a package. > > On 29 May 2018, at 15:51, Sanne Grinovero wrote: > > > > On 29 May 2018 at 13:45, Vittorio Rigamonti wrote: > >> Thanks Adrian, >> >> of course there's a marshalling work under the cover and that is >> reflected into the generated code (specially the accessor methods generated >> from the oneof clause). >> >> My opinion is that on the client side this could be accepted, as long as >> the API are well defined and documented: application developer can build an >> adhoc decorator on the top if needed. 
The alternative to this is to develop >> a protostream equivalent for each supported language and it doesn't seem >> really feasible to me. >> > > ?This might indeed be reasonable for some developers, some languages. > > Just please make sure it's not the only option, as many other developers > will not expect to need a compiler at hand in various stages of the > application lifecycle. > > For example when deploying a JPA model into an appserver, or just booting > Hibernate in JavaSE as well, there is a strong expectation that we'll be > able - at runtime - to inspect the listed Java POJOs via reflection and > automatically generate whatever Infinispan will need. > > Perhaps a key differentiator is between invoking Infinispan APIs (RPC) vs > defining the object models and related CODECs for keys, values, streams and > query results? It might get a bit more fuzzy to differentiate them for > custom functions but I guess we can draw a line somewhere. > > Thanks, > Sanne > > > >> >> On the server side (java only) the situation is different: protobuf is >> optimized for streaming not for storing so probably a Protostream layer is >> needed. >> >> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >> wrote: >> >>> Hi Vittorio, >>> thanks for exploring gRPC. It seems like a very elegant solution for >>> exposing services. I'll have a look at your PoC soon. >>> >>> I feel there are some remarks that need to be made regarding gRPC. gRPC >>> is just some nice cheesy topping on top of protobuf. Google's >>> implementation of protobuf, to be more precise. >>> It does not need handwritten marshallers, but the 'No need for >>> marshaller' does not accurately describe it. Marshallers are needed and are >>> generated under the cover by the library and so are the data objects and >>> you are unfortunately forced to use them. That's both the good news and the >>> bad news:) The whole thing looks very promising and friendly for many uses >>> cases, especially for demos and PoCs :))). 
Nobody wants to write those >>> marshallers. But it starts to become a nuisance if you want to use your own >>> data objects. >>> There is also the ugliness and excessive memory footprint of the >>> generated code, which is the reason Infinispan did not adopt the >>> protobuf-java library although it did adopt protobuf as an encoding format. >>> The Protostream library was created as an alternative implementation to >>> solve the aforementioned problems with the generated code. It solves this >>> by letting the user provide their own data objects. And for the marshallers >>> it gives you two options: a) write the marshaller yourself (hated), b) >>> annotate your data objects and the marshaller gets generated (loved). >>> Protostream does not currently support service definitions but >>> this is something I started to investigate recently after Galder asked me >>> if I think it's doable. I think I'll only find out after I do it:) >>> >>> Adrian >>> >>> >>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>> >>> Hi Infinispan developers, >>> >>> I'm working on a solution for developers who need to access Infinispan >>> services through different programming languages. >>> >>> The focus is not on developing a full-featured client, but rather >>> to discover the value and the limits of this approach. >>> >>> - is it possible to automatically generate useful clients in different >>> languages? >>> - can those clients interoperate on the same cache with the same data >>> types? >>> >>> I came up with a small prototype that I would like to submit to you and >>> on which I would like to gather your impressions. >>> >>> You can find the project here [1]: it is a gRPC-based client/server >>> architecture for Infinispan based on an EmbeddedCache, with very few >>> features exposed atm. 
>>> >>> Currently the project is nothing more than a poc with the following >>> interesting features: >>> >>> - client can be generated in all the gRPC-supported languages: java, go, >>> c++ examples are provided; >>> - the interface is fully typed. No need for a marshaller, and clients built >>> in different languages can cooperate on the same cache; >>> >>> The second item is my preferred one because it frees the developer from >>> data marshalling. >>> >>> What do you think? >>> Sounds interesting? >>> Can you see any flaw? >>> >>> There's also a list of issues for the future [2], basically I would like >>> to investigate these questions: >>> How far can this architecture go? >>> Topology, events, queries... how many of the Infinispan features can >>> fit into a gRPC architecture? >>> >>> Thank you >>> Vittorio >>> >>> [1] https://github.com/rigazilla/ispn-grpc >>> [2] https://github.com/rigazilla/ispn-grpc/issues >>> >>> -- >>> >>> Vittorio Rigamonti >>> >>> Senior Software Engineer >>> >>> Red Hat >>> >>> >>> >>> Milan, Italy >>> >>> vrigamon at redhat.com >>> >>> irc: rigazilla >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >> >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > 
-- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/02c99ba5/attachment-0001.html From galder at redhat.com Wed May 30 05:16:01 2018 From: galder at redhat.com (Galder Zamarreno) Date: Wed, 30 May 2018 11:16:01 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> Message-ID: On Tue, May 29, 2018 at 8:57 PM Adrian Nistor wrote: > Vittorio, a few remarks regarding your statement "...The alternative to > this is to develop a protostream equivalent for each supported language and > it doesn't seem really feasible to me." > > No way! That's a big misunderstanding. We do not need to re-implement the > protostream library in C/C++/C# or any new supported language. > Protostream is just for Java and it is compatible with Google's protobuf > lib we already use in the other clients. We can continue using Google's > protobuf lib for these clients, with or without gRPC. > Protostream does not handle protobuf services as gRPC does, but we can add > support for that with little effort. > > The real problem here is if we want to replace our hot rod invocation > protocol with gRPC to save on the effort of implementing and maintaining > hot rod in all those clients. I wonder why the obvious question is being > avoided in this thread. > ^ It is not being avoided. I stated it quite clearly when I replied but maybe not with enough detail. So, I said: > The biggest problem I see in our client/server architecture is the ability to quickly deliver features/APIs across multiple language clients. 
Both Vittorio and I have seen how long it takes to implement all the different features available in the Java client and port them to Node.js, C/C++/C#...etc. This effort led by Vittorio is trying to improve on that by having some of that work done for us. Granted, not all of it will be done, but it should give us some good foundations on which to build. To expand on it a bit further: the reason it takes us longer to get different features in is because each client implements its own network layer, parses the protocol and does type transformations (between byte[] and whatever the client expects). IMO, the most costly things there are getting the network layer right (from experience with Node.js, it has taken a while to do so) and the parsing work (not only parsing itself, but doing it in an efficient way). The network layer also includes load balancing, failover, cluster failover...etc. From past experience, transforming from byte[] to what the client expects has never really been very problematic for me. What's been difficult here is coming up with the encoding architecture that Gustavo led, whose aim was to improve on the initial compatibility mode. But, with that now clear, understood and proven to solve our issues, the rest in this area should be fairly straightforward IMO. Type transformation, once done, is a constant. As we add more Hot Rod operations, it's mostly the parsing that starts to become more work. Network can also become more work if instead of RPC commands you start supporting stream-based commands. gRPC solves the network (FYI: with the key as an HTTP header and SubchannelPicker you can do hash-aware routing) and the parsing for us. I don't see the need for it to solve our type transformations for us. If it does it, great, but does it support our compatibility requirements? (I had already told Vittorio to check with Gustavo on this). Type transformation is a lower prio for me; network and parsing are more important. Hope this clarifies my POV better. 
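To make the unary-vs-stream distinction concrete, here is a minimal IDL sketch; all service and message names below are made up for illustration and are not taken from the PoC's actual IDL. The point is that the generated stubs give you both call shapes, including the streaming one that topology updates and remote events would need:

```proto
syntax = "proto3";

package ispn.sketch;

// Classic unary request/response, the shape of most Hot Rod operations.
message KeyRequest {
  bytes key = 1;   // opaque payload; typing is a separate concern
}

message ValueResponse {
  bytes value = 1;
}

// Server-side streaming, e.g. for client listeners or topology updates.
message ListenRequest {
  string cache_name = 1;
}

message CacheEvent {
  string type = 1; // e.g. "entry_created", "entry_removed"
  bytes key = 2;
}

service CacheService {
  rpc Get(KeyRequest) returns (ValueResponse);           // unary RPC
  rpc Listen(ListenRequest) returns (stream CacheEvent); // stream-based
}
```

Both shapes come out of protoc with the network layer handled by the gRPC runtime, which is exactly the part that is costly to get right by hand in each client.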
Cheers > > > Adrian > > > On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: > > Thanks Adrian, > > of course there's a marshalling work under the cover and that is reflected > into the generated code (specially the accessor methods generated from the > oneof clause). > > My opinion is that on the client side this could be accepted, as long as > the API are well defined and documented: application developer can build an > adhoc decorator on the top if needed. The alternative to this is to develop > a protostream equivalent for each supported language and it doesn't seem > really feasible to me. > > On the server side (java only) the situation is different: protobuf is > optimized for streaming not for storing so probably a Protostream layer is > needed. > > On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor wrote: > >> Hi Vittorio, >> thanks for exploring gRPC. It seems like a very elegant solution for >> exposing services. I'll have a look at your PoC soon. >> >> I feel there are some remarks that need to be made regarding gRPC. gRPC >> is just some nice cheesy topping on top of protobuf. Google's >> implementation of protobuf, to be more precise. >> It does not need handwritten marshallers, but the 'No need for >> marshaller' does not accurately describe it. Marshallers are needed and are >> generated under the cover by the library and so are the data objects and >> you are unfortunately forced to use them. That's both the good news and the >> bad news:) The whole thing looks very promising and friendly for many uses >> cases, especially for demos and PoCs :))). Nobody wants to write those >> marshallers. But it starts to become a nuisance if you want to use your own >> data objects. >> There is also the ugliness and excessive memory footprint of the >> generated code, which is the reason Infinispan did not adopt the >> protobuf-java library although it did adopt protobuf as an encoding format. 
>> The Protostream library was created as an alternative implementation to >> solve the aforementioned problems with the generated code. It solves this >> by letting the user provide their own data objects. And for the marshallers >> it gives you two options: a) write the marshaller yourself (hated), b) >> annotate your data objects and the marshaller gets generated (loved). >> Protostream does not currently support service definitions but >> this is something I started to investigate recently after Galder asked me >> if I think it's doable. I think I'll only find out after I do it:) >> >> Adrian >> >> >> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >> >> Hi Infinispan developers, >> >> I'm working on a solution for developers who need to access Infinispan >> services through different programming languages. >> >> The focus is not on developing a full-featured client, but rather >> to discover the value and the limits of this approach. >> >> - is it possible to automatically generate useful clients in different >> languages? >> - can those clients interoperate on the same cache with the same data >> types? >> >> I came up with a small prototype that I would like to submit to you and >> on which I would like to gather your impressions. >> >> You can find the project here [1]: it is a gRPC-based client/server >> architecture for Infinispan based on an EmbeddedCache, with very few >> features exposed atm. >> >> Currently the project is nothing more than a poc with the following >> interesting features: >> >> - client can be generated in all the gRPC-supported languages: java, go, >> c++ examples are provided; >> - the interface is fully typed. No need for a marshaller, and clients built >> in different languages can cooperate on the same cache; >> >> The second item is my preferred one because it frees the developer from >> data marshalling. >> >> What do you think? >> Sounds interesting? >> Can you see any flaw? 
>> >> There's also a list of issues for the future [2], basically I would like >> to investigate these questions: >> How far can this architecture go? >> Topology, events, queries... how many of the Infinispan features can >> fit into a gRPC architecture? >> >> Thank you >> Vittorio >> >> [1] https://github.com/rigazilla/ispn-grpc >> [2] https://github.com/rigazilla/ispn-grpc/issues >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> >> >> _______________________________________________ >> infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/46dcb426/attachment.html From anistor at redhat.com Wed May 30 05:56:37 2018 From: anistor at redhat.com (Adrian Nistor) Date: Wed, 30 May 2018 12:56:37 +0300 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> <553d5779-b32c-7fe3-96e1-85e07ec56ad4@redhat.com> Message-ID: <95ec6c27-73cf-1bd0-e8d1-216bb2bef42e@redhat.com> The oneof and WrappedMessage solve the same problem but in a different way. Oneof has the nasty effect that it ties the service model to the user data model. 
Even if it seems like just one more line of code to add when a new user type is introduced, it is one line of code in the wrong place because you'll have to re-generate the service, i.e. users run protoc again on OUR IDLs. Should a user do that? This coupling between Infinispan's service model and the user's data model bothers me. WrappedMessage is just a wrapper around an array of bytes + information regarding what message type or what scalar type is in there. Something very similar to a VARIANT [1]. The reason it is needed is explained here [2]. You are correct, this is not a gRPC limitation, it is a by-design protobuf protocol limitation, that was very thoughtfully introduced to reduce wire-level bandwidth for the common case where types are static. Unfortunately it leaves generic/dynamic types in mid-air. But it is fairly easy to solve, as you can see with WrappedMessage. At the time I introduced WrappedMessage we were using protobuf 2. protobuf 3 introduces type Any, which solves the issue in a similar way to WrappedMessage. The difference is Any seems to have been created to wrap either a plain byte[] or a message type that has been marshalled to a byte[]. No support for scalars in sight. Can we solve that? Sure, put a WrappedMessage inside that byte[] :)))) That is the reason I did not jump immediately at using Any and stayed with WrappedMessage. Can a 150-line PoC be a proposal for the ISPN object model? No, but we need to explore the pain points of gRPC and protobuf that are relevant to our usage, and this thing with generically typed services is one of them. I think we already have a good solution in sight, before giving up and going with byte[] for key and value as was suggested earlier here. I can make a PR to the grpc PoC to show it by the end of the week. 
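For readers not familiar with it, the idea behind such a variant-like wrapper can be sketched roughly like this; the message name, field names and field numbers below are purely illustrative and are not the actual Protostream WrappedMessage definition:

```proto
syntax = "proto3";

// Illustrative variant-like envelope. It either carries a scalar
// directly, or a marshalled message plus the type name needed to
// know what to reconstruct from those bytes later.
message WrappedValue {
  oneof value {
    int64 wrapped_int64 = 1;
    double wrapped_double = 2;
    string wrapped_string = 3;
    bytes wrapped_message_bytes = 4; // a marshalled protobuf message
  }
  // Meaningful only when wrapped_message_bytes is set.
  string wrapped_type_name = 5;
}
```

google.protobuf.Any covers only the message case (a type URL plus bytes), which is the missing-scalars gap described above.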
Adrian [1] https://en.wikipedia.org/wiki/Variant_type [2] https://developers.google.com/protocol-buffers/docs/techniques#streaming On 05/30/2018 11:34 AM, Vittorio Rigamonti wrote: > > > On Tue, May 29, 2018 at 8:59 PM, Adrian Nistor > wrote: > > So you assume the two are separate, Emmanuel. So do I. > > But in the current PoC the user data model is directly referenced > by the service model interface (KeyMsg and ValueMsg are oneofs > listing all possible user application types???). I was assuming > this hard dependency was there just to make things simple for the > scope of the PoC. But let's not make this too simple because it > will stop being useful. My expectation is to see a generic yet > fully typed 'cache service' interface that does not depend on the > key and value types that come from userland, using maybe > 'google.protobuf.Any' or our own 'WrappedMessage' type instead. > I'm not sure what to believe now because discussing my hopes and > assumptions on the gRPC topic on zulip I think I understood the > opposite is desired. Vittorio, please comment on this. > > > Yep, that was my design choice. Well, my first goal was to keep the > framework language-independent: to achieve that I tried to define as > much as possible in grpc/protobuf (that's why I didn't use the Any > clause). Then I realized that with very little effort I could design a > framework that works only with user data, from the user side to the > cache storage, and I'd like to investigate this, mainly for two reasons: > > - from the user point of view I like the idea that I can find my > object types in the cache > - the embeddedCache is transparently exposed > > but this is my 150 lines of code gRPC server prototype, not a proposal > for the ISPN object model. However, it's ok to use it as a starting point > for a wider discussion 
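The coupling in question can be sketched in a few lines; Person and the field names here are invented for illustration, not taken from the PoC:

```proto
syntax = "proto3";

// A user type referenced directly by the service-level key message.
message Person {
  string name = 1;
}

// Service model enumerating user types via oneof: adding a new user
// type means editing this file and re-running protoc on the service
// IDL, which is the coupling being discussed.
message KeyMsg {
  oneof key {
    string string_key = 1;
    int32 int_key = 2;
    Person person_key = 3;
  }
}
```

That `person_key` line is the "one more line of code" that looks cheap but lives in the service model rather than the user's data model.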
And if we do it, would you expect > to be able to marshall the service call using gRPC lib and at the > same time be able to marshall the user model using whatever other > library? Would be nice but that seems to be a no-no with gRPC, or > I did not search deep enough. I only looked at the java > implementation anyway. It seems to be forcing you to go with > protoc generated code and protobuf-java.jar all the way, for > marshalling both the service and its arguments. And this goes > infinitely deeper. If a service argument of type A has a nested > field of type B and the marshaller for A is generated with > protobuf-java then so is B. Using oneofs or type 'Any' still do > not save you from this. The only escape is to pretend the user > payload is of type 'bytes'. At that point you are left to do your > marshaling to and from bytes yourself. And you are also left with > the question, what the heck is the contents of that byte array > next time you unmarshall it, which is currently answered by > WrappedMessage. > > And indeed the "oneof" clause in my message definition solves the same > problem solved by the WrappedMessage message: what I have to do with > these bytes? Actually I'm not sure this is a gRPC limitation: if I > receive a stream of bytes I also need some info on what I have to > reconstruct.... I'm just guessing > > > So the more I look at gRPC it seems elegant for most purposes but > lacking for ours. And again, as with protocol buffers, the wire > protocol and the IDL are really nice. It is the implementation > that is lacking, IMHO. > > I think to be really on the same page we should first make a clear > statement of what we intend to achieve here in a bit more detail. > Also, since this is not a clean slate effort, we should think > right from the start what are the expected interactions with > existing code base, like what are we willing to sacrifice. > Somebody mention hot rod please! 
> > Adrian > > > > On 05/29/2018 07:20 PM, Emmanuel Bernard wrote: >> Right. Here we are talking about a gRPC representation of the >> client server interactions. Not the data schema stored in ISPN. >> In that model, the API is compiled by us and handed over as a >> package. >> >> On 29 May 2018, at 15:51, Sanne Grinovero > > wrote: >> >>> >>> >>> On 29 May 2018 at 13:45, Vittorio Rigamonti >> > wrote: >>> >>> Thanks Adrian, >>> >>> of course there's a marshalling work under the cover and >>> that is reflected into the generated code (specially the >>> accessor methods generated from the oneof clause). >>> >>> My opinion is that on the client side this could be >>> accepted, as long as the API are well defined and >>> documented: application developer can build an adhoc >>> decorator on the top if needed. The alternative to this is >>> to develop a protostream equivalent for each supported >>> language and it doesn't seem really feasible to me. >>> >>> >>> ?This might indeed be reasonable for some developers, some >>> languages. >>> >>> Just please make sure it's not the only option, as many other >>> developers will not expect to need a compiler at hand in various >>> stages of the application lifecycle. >>> >>> For example when deploying a JPA model into an appserver, or >>> just booting Hibernate in JavaSE as well, there is a strong >>> expectation that we'll be able - at runtime - to inspect the >>> listed Java POJOs via reflection and automatically generate >>> whatever Infinispan will need. >>> >>> Perhaps a key differentiator is between invoking Infinispan APIs >>> (RPC) vs defining the object models and related CODECs for keys, >>> values, streams and query results? It might get a bit more fuzzy >>> to differentiate them for custom functions but I guess we can >>> draw a line somewhere. 
>>> >>> Thanks, >>> Sanne >>> >>> >>> On the server side (java only) the situation is different: >>> protobuf is optimized for streaming not for storing so >>> probably a Protostream layer is needed. >>> >>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>> > wrote: >>> >>> Hi Vittorio, >>> thanks for exploring gRPC. It seems like a very elegant >>> solution for exposing services. I'll have a look at your >>> PoC soon. >>> >>> I feel there are some remarks that need to be made >>> regarding gRPC. gRPC is just some nice cheesy topping on >>> top of protobuf. Google's implementation of protobuf, to >>> be more precise. >>> It does not need handwritten marshallers, but the 'No >>> need for marshaller' does not accurately describe it. >>> Marshallers are needed and are generated under the cover >>> by the library and so are the data objects and you are >>> unfortunately forced to use them. That's both the good >>> news and the bad news:) The whole thing looks very >>> promising and friendly for many uses cases, especially >>> for demos and PoCs :))). Nobody wants to write those >>> marshallers. But it starts to become a nuisance if you >>> want to use your own data objects. >>> There is also the ugliness and excessive memory >>> footprint of the generated code, which is the reason >>> Infinispan did not adopt the protobuf-java library >>> although it did adopt protobuf as an encoding format. >>> The Protostream library was created as an alternative >>> implementation to solve the aforementioned problems with >>> the generated code. It solves this by letting the user >>> provide their own data objects. And for the marshallers >>> it gives you two options: a) write the marshaller >>> yourself (hated), b) annotated your data objects and the >>> marshaller gets generated (loved). Protostream does not >>> currently support service definitions right now but this >>> is something I started to investigate recently after >>> Galder asked me if I think it's doable. 
I think I'll >>> only find out after I do it:) >>> >>> Adrian >>> >>> >>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>> Hi Infinispan developers, >>>> >>>> I'm working on a solution for developers who need to >>>> access Infinispan services through different >>>> programming languages. >>>> >>>> The focus is not on developing a full featured client, >>>> but rather discover the value and the limits of this >>>> approach. >>>> >>>> - is it possible to automatically generate useful >>>> clients in different languages? >>>> - can that clients interoperate on the same cache with >>>> the same data types? >>>> >>>> I came out with a small prototype that I would like to >>>> submit to you and on which I would like to gather your >>>> impressions. >>>> >>>> ?You can found the project here [1]: is a gRPC-based >>>> client/server architecture for Infinispan based on and >>>> EmbeddedCache, with very few features exposed atm. >>>> >>>> Currently the project is nothing more than a poc with >>>> the following interesting features: >>>> >>>> - client can be generated in all the grpc supported >>>> language: java, go, c++ examples are provided; >>>> - the interface is full typed. No need for marshaller >>>> and clients build in different language can cooperate >>>> on the same cache; >>>> >>>> The second item is my preferred one beacuse it frees >>>> the developer from data marshalling. >>>> >>>> What do you think about? >>>> Sounds interesting? >>>> Can you see any flaw? >>>> >>>> There's also a list of issues for the future [2], >>>> basically I would like to investigate these questions: >>>> How far this architecture can go? >>>> Topology, events, queries... how many of the Infinispan >>>> features can be fit in a grpc architecture? 
>>>> >>>> Thank you >>>> Vittorio >>>> >>>> [1] https://github.com/rigazilla/ispn-grpc >>>> >>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>> >>>> >>>> -- >>>> >>>> Vittorio Rigamonti >>>> >>>> Senior Software Engineer >>>> >>>> Red Hat >>>> >>>> >>>> >>>> Milan, Italy >>>> >>>> vrigamon at redhat.com >>>> >>>> irc: rigazilla >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> >>> >>> >>> >>> -- >>> >>> Vittorio Rigamonti >>> >>> Senior Software Engineer >>> >>> Red Hat >>> >>> >>> >>> Milan, Italy >>> >>> vrigamon at redhat.com >>> >>> irc: rigazilla >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > > > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/4f872d6a/attachment-0001.html From gustavo at infinispan.org Wed May 30 06:22:39 2018 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Wed, 30 May 2018 11:22:39 +0100 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <95ec6c27-73cf-1bd0-e8d1-216bb2bef42e@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> <553d5779-b32c-7fe3-96e1-85e07ec56ad4@redhat.com> <95ec6c27-73cf-1bd0-e8d1-216bb2bef42e@redhat.com> Message-ID: On Wed, May 30, 2018 at 10:56 AM, Adrian Nistor wrote: > The oneof and WrappedMessage solve the same problem but in a different way. > Oneof has the nasty effect that in ties the service model to the user data > model. > The user data model is only "static" at storage level (guided by configuration), and the user data can travel on the wire in any format the user wants [1] [1] https://github.com/infinispan/infinispan/blob/master/client/hotrod-client/src/test/java/org/infinispan/client/hotrod/transcoding/DataFormatTest.java#L109 So better not to assume it will be marshalled and unmarshalled in a specific way. > Even if it seems like just one more line of code to add when a new user > type is introduced, it is one line of code in the wrong place because > you'll have to re-generate the service. IE user run protoc again on OUR > IDLs. Should a user do that? This coupling between the infinispan's service > model and the user's data model bothers me. > > WrappedMessage is just a wrapper around an array of bytes + information > regarding what message type or what scalar type is in there. Something very > similar to a VARIANT [1]. The reason it is needed is explained here [2]. 
> > You are correct, this is not a gRPC limitation, it is a by-design protobuf > protocol limitation, that was very thoughtfully introduced to reduce wire > level bandwitdth for the common case where types are static. Unfortunately > it leaves generic/dynamic types in mid-air. But it is fairly easy to solve, > as you can see with WrappedMessage. At the time I introduced WrappedMessage > we were using protobuf 2. > > protobuf 3 introduces type Any, which solves the issue in a similar way > with WrappedMessage. The difference is Any seems to have been created to > wrap either a plain byte[] or a message type that has been marshalled to a > byte[]. No support for scalars in sight. Can we solve that? Sure, put a > WrappedMessage inside that byte[] :)))) That is the reason I did not jump > immediately at using Any and stayed with WrappedMessage. > > Can a 150 lines PoC be a proposal for the ISPN object model? No, but we > need to explore the pain points of gRPC and protobuf that are relevant to > our usage, and this thing with genericly typed services is one of them. > I think we already have a good solution in sight, before giving up and > going with byte[] for key and value as it was suggested earlier here. I can > make a PR to the grpc PoC to show it by the end of the week. > > Adrian > > [1] https://en.wikipedia.org/wiki/Variant_type > [2] https://developers.google.com/protocol-buffers/docs/ > techniques#streaming > > > On 05/30/2018 11:34 AM, Vittorio Rigamonti wrote: > > > > On Tue, May 29, 2018 at 8:59 PM, Adrian Nistor wrote: > >> So you assume the two are separate, Emmanuel. So do I. >> >> But in the current PoC the user data model is directly referenced by the >> service model interface (KeyMsg and ValueMsg are oneofs listing all >> possible user application types???). I was assuming this hard dependency >> was there just to make things simple for the scope of the PoC. But let's >> not make this too simple because it will stop being useful. 
My expectation >> is to see a generic yet fully typed 'cache service' interface that does not >> depend on the key and value types that come from userland, using maybe >> 'google.protobuf.Any' or our own 'WrappedMessage' type instead. I'm not >> sure what to believe now because discussing my hopes and assumptions on the >> gRPC topic on zulip I think I understood the opposite is desired. >> Vittorio, please comment on this. >> > > Yep that was my design choice. Well my first goal was to keep the > framework language independent: to reach that I tried to define in > grpc/protobuf as much as possible (that's why I didn't use the Any clause). > Then I realized that with very little effort I could design a framework > that works only with user data from the user side to the cache storage and > I'd liked to investigate this, manly for two reasons: > > - from the user point of view I like the idea that I can found my objects > types in the cache > - the embeddedCache is transparently exposed > > but this is my 150 lines of code grpc server prototype, not a proposal for > the ISPN object model. However it's ok to use it as starting point for a > wider discussion > > >> >> I'm still hoping we want to keep the service interface generic and >> separated from the user model. And if we do it, would you expect to be able >> to marshall the service call using gRPC lib and at the same time be able to >> marshall the user model using whatever other library? Would be nice but >> that seems to be a no-no with gRPC, or I did not search deep enough. I only >> looked at the java implementation anyway. It seems to be forcing you to go >> with protoc generated code and protobuf-java.jar all the way, for >> marshalling both the service and its arguments. And this goes infinitely >> deeper. If a service argument of type A has a nested field of type B and >> the marshaller for A is generated with protobuf-java then so is B. Using >> oneofs or type 'Any' still do not save you from this. 
The only escape is >> to pretend the user payload is of type 'bytes'. At that point you are left >> to do your marshalling to and from bytes yourself. And you are also left >> with the question of what the heck the contents of that byte array are the >> next time you unmarshall it, which is currently answered by WrappedMessage. >> > > And indeed the "oneof" clause in my message definition solves the same > problem solved by the WrappedMessage message: what do I have to do with these > bytes? Actually, I'm not sure this is a gRPC limitation: if I receive a > stream of bytes I also need some info on what I have to reconstruct... I'm > just guessing. > >> >> So the more I look at gRPC, the more it seems elegant for most purposes but lacking >> for ours. And again, as with protocol buffers, the wire protocol and the >> IDL are really nice. It is the implementation that is lacking, IMHO. >> >> I think to be really on the same page we should first make a clear >> statement of what we intend to achieve here in a bit more detail. Also, >> since this is not a clean-slate effort, we should think right from the >> start about the expected interactions with the existing code base, like what >> we are willing to sacrifice. Somebody mention Hot Rod, please! >> >> Adrian >> >> >> >> On 05/29/2018 07:20 PM, Emmanuel Bernard wrote: >> >> Right. Here we are talking about a gRPC representation of the client/server >> interactions, not the data schema stored in ISPN. In that model, the >> API is compiled by us and handed over as a package. >> >> On 29 May 2018, at 15:51, Sanne Grinovero wrote: >> >> >> >> On 29 May 2018 at 13:45, Vittorio Rigamonti wrote: >> >>> Thanks Adrian, >>> >>> of course there's marshalling work under the cover and that is >>> reflected in the generated code (especially the accessor methods generated >>> from the oneof clause).
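[Editor's note] The oneof-based accessors mentioned above come from message definitions of roughly this shape. This is a hypothetical sketch mirroring the PoC's KeyMsg/ValueMsg idea; the actual types and field numbers in ispn-grpc may differ:

```protobuf
syntax = "proto3";

// Every user-visible key and value type is enumerated in a oneof, which
// is what ties the service IDL to the user data model: adding a new
// application type means editing this file and re-running protoc.
message KeyMsg {
  oneof key {
    string string_key = 1;
    int64 long_key = 2;
  }
}

message ValueMsg {
  oneof value {
    string string_value = 1;
    bytes raw_value = 2;
  }
}

message PutRequest {
  KeyMsg key = 1;
  ValueMsg value = 2;
}

message PutResponse {
}

service CacheService {
  rpc Put(PutRequest) returns (PutResponse);
  rpc Get(KeyMsg) returns (ValueMsg);
}
```

For each oneof member, protoc emits per-language typed accessors (for example, a which-field-is-set case enum plus getters in Java), which is the generated-code surface being discussed here.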
>>> >>> My opinion is that on the client side this could be accepted, as long as >>> the APIs are well defined and documented: an application developer can build an >>> ad-hoc decorator on top if needed. The alternative to this is to develop >>> a Protostream equivalent for each supported language, and that doesn't seem >>> really feasible to me. >>> >> >> This might indeed be reasonable for some developers, some languages. >> >> Just please make sure it's not the only option, as many other developers >> will not expect to need a compiler at hand in various stages of the >> application lifecycle. >> >> For example, when deploying a JPA model into an appserver, or just booting >> Hibernate in JavaSE as well, there is a strong expectation that we'll be >> able - at runtime - to inspect the listed Java POJOs via reflection and >> automatically generate whatever Infinispan will need. >> >> Perhaps a key differentiator is between invoking Infinispan APIs (RPC) vs >> defining the object models and related CODECs for keys, values, streams and >> query results? It might get a bit more fuzzy to differentiate them for >> custom functions, but I guess we can draw a line somewhere. >> >> Thanks, >> Sanne >> >> >> >>> >>> On the server side (Java only) the situation is different: protobuf is >>> optimized for streaming, not for storing, so probably a Protostream layer is >>> needed. >>> >>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>> wrote: >>> >>>> Hi Vittorio, >>>> thanks for exploring gRPC. It seems like a very elegant solution for >>>> exposing services. I'll have a look at your PoC soon. >>>> >>>> I feel there are some remarks that need to be made regarding gRPC. gRPC >>>> is just some nice cheesy topping on top of protobuf. Google's >>>> implementation of protobuf, to be more precise. >>>> It does not need handwritten marshallers, but the 'No need for a >>>> marshaller' claim does not accurately describe it.
Marshallers are needed and are >>>> generated under the covers by the library, and so are the data objects, and >>>> you are unfortunately forced to use them. That's both the good news and the >>>> bad news :) The whole thing looks very promising and friendly for many use >>>> cases, especially for demos and PoCs :))). Nobody wants to write those >>>> marshallers. But it starts to become a nuisance if you want to use your own >>>> data objects. >>>> There is also the ugliness and excessive memory footprint of the >>>> generated code, which is the reason Infinispan did not adopt the >>>> protobuf-java library although it did adopt protobuf as an encoding format. >>>> The Protostream library was created as an alternative implementation to >>>> solve the aforementioned problems with the generated code. It solves this >>>> by letting the user provide their own data objects. And for the marshallers >>>> it gives you two options: a) write the marshaller yourself (hated), b) >>>> annotate your data objects and the marshaller gets generated (loved). >>>> Protostream does not currently support service definitions, but >>>> this is something I started to investigate recently after Galder asked me >>>> if I think it's doable. I think I'll only find out after I do it :) >>>> >>>> Adrian >>>> >>>> >>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>> >>>> Hi Infinispan developers, >>>> >>>> I'm working on a solution for developers who need to access Infinispan >>>> services through different programming languages. >>>> >>>> The focus is not on developing a full-featured client, but rather on >>>> discovering the value and the limits of this approach. >>>> >>>> - is it possible to automatically generate useful clients in different >>>> languages? >>>> - can those clients interoperate on the same cache with the same data >>>> types? >>>> >>>> I came out with a small prototype that I would like to submit to you >>>> and on which I would like to gather your impressions.
>>>> >>>> You can find the project here [1]: it is a gRPC-based client/server >>>> architecture for Infinispan based on an EmbeddedCache, with very few >>>> features exposed atm. >>>> >>>> Currently the project is nothing more than a PoC with the following >>>> interesting features: >>>> >>>> - clients can be generated in all the gRPC-supported languages: Java, Go >>>> and C++ examples are provided; >>>> - the interface is fully typed. No need for a marshaller, and clients built >>>> in different languages can cooperate on the same cache; >>>> >>>> The second item is my preferred one because it frees the developer from >>>> data marshalling. >>>> >>>> What do you think about it? >>>> Sounds interesting? >>>> Can you see any flaw? >>>> >>>> There's also a list of issues for the future [2]; basically I would >>>> like to investigate these questions: >>>> How far can this architecture go? >>>> Topology, events, queries... how many of the Infinispan features can >>>> fit in a gRPC architecture? >>>> >>>> Thank you >>>> Vittorio >>>> >>>> [1] https://github.com/rigazilla/ispn-grpc >>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>> >>>> -- >>>> >>>> Vittorio Rigamonti >>>> >>>> Senior Software Engineer >>>> >>>> Red Hat >>>> >>>> >>>> >>>> Milan, Italy >>>> >>>> vrigamon at redhat.com >>>> >>>> irc: rigazilla >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>> >>> >>> -- >>> >>> Vittorio Rigamonti >>> >>> Senior Software Engineer >>> >>> Red Hat >>> >>> >>> >>> Milan, Italy >>> >>> vrigamon at redhat.com >>> >>> irc: rigazilla >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev
at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > > > -- > > Vittorio Rigamonti > > Senior Software Engineer > > Red Hat > > > > Milan, Italy > > vrigamon at redhat.com > > irc: rigazilla > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/0585b3da/attachment-0001.html From anistor at redhat.com Wed May 30 06:31:29 2018 From: anistor at redhat.com (Adrian Nistor) Date: Wed, 30 May 2018 13:31:29 +0300 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <90D7A540-1A86-48F6-8986-484CD5CC8398@hibernate.org> <553d5779-b32c-7fe3-96e1-85e07ec56ad4@redhat.com> <95ec6c27-73cf-1bd0-e8d1-216bb2bef42e@redhat.com> Message-ID: Fair point. That's why protobuf's Any has a type URL inside, exactly for such flexibility: https://github.com/google/protobuf/blob/master/src/google/protobuf/any.proto#L150 Well, it's not a MIME type as per Infinispan, but close enough. On 05/30/2018 01:22 PM, Gustavo Fernandes wrote: > > On Wed, May 30, 2018 at 10:56 AM, Adrian Nistor > wrote: > > The oneof and WrappedMessage solve the same problem but in a > different way. > Oneof has the nasty effect that it ties the service model to the > user data model.
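[Editor's note] The contrast between the two envelope styles discussed in this thread can be sketched in proto IDL. This is a hedged illustration only: Infinispan's real WrappedMessage definition has more fields and different names than shown here, and the field numbers are arbitrary.

```protobuf
syntax = "proto3";

import "google/protobuf/any.proto";

// Any-based envelope: the type_url says what the packed bytes are, but a
// scalar (e.g. a plain int64 key) must first be wrapped in some message
// before it can be packed into an Any.
message AnyEnvelope {
  google.protobuf.Any payload = 1;
}

// WrappedMessage-style envelope: the oneof covers scalars directly as
// well as marshalled messages, so primitive keys need no extra layer.
message WrappedPayload {
  oneof payload {
    string wrapped_string = 1;
    int64 wrapped_int64 = 2;
    double wrapped_double = 3;
    bytes wrapped_message = 4;    // a marshalled protobuf message
  }
  string wrapped_type_name = 5;   // set only when payload is wrapped_message
}
```

Either envelope keeps the service IDL independent of user types; the difference is where scalar support and the type naming live.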
> > > > The user data model is only "static" at storage level (guided by > configuration), and the user data can travel on the wire in any format > the user wants [1] > > [1] > https://github.com/infinispan/infinispan/blob/master/client/hotrod-client/src/test/java/org/infinispan/client/hotrod/transcoding/DataFormatTest.java#L109 > > So better not to assume it will be marshalled and unmarshalled in a > specific way. > > Even if it seems like just one more line of code to add when a new > user type is introduced, it is one line of code in the wrong place > because you'll have to re-generate the service. IE user run protoc > again on OUR IDLs. Should a user do that? This coupling between > the infinispan's service model and the user's data model bothers me. > > WrappedMessage is just a wrapper around an array of bytes + > information regarding what message type or what scalar type is in > there. Something very similar to a VARIANT [1]. The reason it is > needed is explained here [2]. > > You are correct, this is not a gRPC limitation, it is a by-design > protobuf protocol limitation, that was very thoughtfully > introduced to reduce wire level bandwitdth for the common case > where types are static. Unfortunately it leaves generic/dynamic > types in mid-air. But it is fairly easy to solve, as you can see > with WrappedMessage. At the time I introduced WrappedMessage we > were using protobuf 2. > > protobuf 3 introduces type Any, which solves the issue in a > similar way with WrappedMessage. The difference is Any seems to > have been created to wrap either a plain byte[] or a message type > that has been marshalled to a byte[]. No support for scalars in > sight. Can we solve that? Sure, put a WrappedMessage inside that > byte[] :)))) That is the reason I did not jump immediately at > using Any and stayed with WrappedMessage. > > Can a 150 lines PoC be a proposal for the ISPN object model? 
No, > but we need to explore the pain points of gRPC and protobuf that > are relevant to our usage, and this thing with genericly typed > services is one of them. > I think we already have a good solution in sight, before giving up > and going with byte[] for key and value as it was suggested > earlier here. I can make a PR to the grpc PoC to show it by the > end of the week. > > Adrian > > [1] https://en.wikipedia.org/wiki/Variant_type > > [2] > https://developers.google.com/protocol-buffers/docs/techniques#streaming > > > > > On 05/30/2018 11:34 AM, Vittorio Rigamonti wrote: >> >> >> On Tue, May 29, 2018 at 8:59 PM, Adrian Nistor >> > wrote: >> >> So you assume the two are separate, Emmanuel. So do I. >> >> But in the current PoC the user data model is directly >> referenced by the service model interface (KeyMsg and >> ValueMsg are oneofs listing all possible user application >> types???). I was assuming this hard dependency was there just >> to make things simple for the scope of the PoC. But let's not >> make this too simple because it will stop being useful. My >> expectation is to see a generic yet fully typed 'cache >> service' interface that does not depend on the key and value >> types that come from userland, using maybe >> 'google.protobuf.Any' or our own 'WrappedMessage' type >> instead. I'm not sure what to believe now because discussing >> my hopes and assumptions on the gRPC topic on zulip I think I >> understood the opposite is desired. Vittorio, please comment >> on this. >> >> >> Yep that was my design choice. Well my first goal was to keep the >> framework language independent: to reach that I tried to define >> in grpc/protobuf as much as possible (that's why I didn't use the >> Any clause). Then I realized that with very little effort I could >> design a framework that works only with user data from the user >> side to the cache storage and I'd? 
liked to investigate this, >> manly for two reasons: >> >> - from the user point of view I like the idea that I can found my >> objects types in the cache >> - the embeddedCache is transparently exposed >> >> but this is my 150 lines of code grpc server prototype, not a >> proposal for the ISPN object model. However it's ok to use it as >> starting point for a wider discussion >> >> >> I'm still hoping we want to keep the service interface >> generic and separated from the user model. And if we do it, >> would you expect to be able to marshall the service call >> using gRPC lib and at the same time be able to marshall the >> user model using whatever other library? Would be nice but >> that seems to be a no-no with gRPC, or I did not search deep >> enough. I only looked at the java implementation anyway. It >> seems to be forcing you to go with protoc generated code and >> protobuf-java.jar all the way, for marshalling both the >> service and its arguments. And this goes infinitely deeper. >> If a service argument of type A has a nested field of type B >> and the marshaller for A is generated with protobuf-java then >> so is B. Using oneofs or type 'Any' still do not save you >> from this.? The only escape is to pretend the user payload is >> of type 'bytes'. At that point you are left to do your >> marshaling to and from bytes yourself. And you are also left >> with the question, what the heck is the contents of that byte >> array next time you unmarshall it, which is currently >> answered by WrappedMessage. >> >> And indeed the "oneof" clause in my message definition solves the >> same problem solved by the WrappedMessage message: what I have to >> do with these bytes? Actually I'm not sure this is a gRPC >> limitation: if I receive a stream of bytes I also need some info >> on what I have to reconstruct.... I'm just guessing >> >> >> So the more I look at gRPC it seems elegant for most purposes >> but lacking for ours. 
And again, as with protocol buffers, >> the wire protocol and the IDL are really nice. It is the >> implementation that is lacking, IMHO. >> >> I think to be really on the same page we should first make a >> clear statement of what we intend to achieve here in a bit >> more detail. Also, since this is not a clean slate effort, we >> should think right from the start what are the expected >> interactions with existing code base, like what are we >> willing to sacrifice. Somebody mention hot rod please! >> >> Adrian >> >> >> >> On 05/29/2018 07:20 PM, Emmanuel Bernard wrote: >>> Right. Here we are talking about a gRPC representation of >>> the client server interactions. Not the data schema stored >>> in ISPN. In that model, the API is compiled by us and handed >>> over as a package. >>> >>> On 29 May 2018, at 15:51, Sanne Grinovero >>> > wrote: >>> >>>> >>>> >>>> On 29 May 2018 at 13:45, Vittorio Rigamonti >>>> > wrote: >>>> >>>> Thanks Adrian, >>>> >>>> of course there's a marshalling work under the cover >>>> and that is reflected into the generated code >>>> (specially the accessor methods generated from the >>>> oneof clause). >>>> >>>> My opinion is that on the client side this could be >>>> accepted, as long as the API are well defined and >>>> documented: application developer can build an adhoc >>>> decorator on the top if needed. The alternative to this >>>> is to develop a protostream equivalent for each >>>> supported language and it doesn't seem really feasible >>>> to me. >>>> >>>> >>>> ?This might indeed be reasonable for some developers, some >>>> languages. >>>> >>>> Just please make sure it's not the only option, as many >>>> other developers will not expect to need a compiler at hand >>>> in various stages of the application lifecycle. 
>>>> >>>> For example when deploying a JPA model into an appserver, >>>> or just booting Hibernate in JavaSE as well, there is a >>>> strong expectation that we'll be able - at runtime - to >>>> inspect the listed Java POJOs via reflection and >>>> automatically generate whatever Infinispan will need. >>>> >>>> Perhaps a key differentiator is between invoking Infinispan >>>> APIs (RPC) vs defining the object models and related CODECs >>>> for keys, values, streams and query results? It might get a >>>> bit more fuzzy to differentiate them for custom functions >>>> but I guess we can draw a line somewhere. >>>> >>>> Thanks, >>>> Sanne >>>> >>>> >>>> On the server side (java only) the situation is >>>> different: protobuf is optimized for streaming not for >>>> storing so probably a Protostream layer is needed. >>>> >>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>>> > wrote: >>>> >>>> Hi Vittorio, >>>> thanks for exploring gRPC. It seems like a very >>>> elegant solution for exposing services. I'll have a >>>> look at your PoC soon. >>>> >>>> I feel there are some remarks that need to be made >>>> regarding gRPC. gRPC is just some nice cheesy >>>> topping on top of protobuf. Google's implementation >>>> of protobuf, to be more precise. >>>> It does not need handwritten marshallers, but the >>>> 'No need for marshaller' does not accurately >>>> describe it. Marshallers are needed and are >>>> generated under the cover by the library and so are >>>> the data objects and you are unfortunately forced >>>> to use them. That's both the good news and the bad >>>> news:) The whole thing looks very promising and >>>> friendly for many uses cases, especially for demos >>>> and PoCs :))). Nobody wants to write those >>>> marshallers. But it starts to become a nuisance if >>>> you want to use your own data objects. 
>>>> There is also the ugliness and excessive memory >>>> footprint of the generated code, which is the >>>> reason Infinispan did not adopt the protobuf-java >>>> library although it did adopt protobuf as an >>>> encoding format. >>>> The Protostream library was created as an >>>> alternative implementation to solve the >>>> aforementioned problems with the generated code. It >>>> solves this by letting the user provide their own >>>> data objects. And for the marshallers it gives you >>>> two options: a) write the marshaller yourself >>>> (hated), b) annotated your data objects and the >>>> marshaller gets generated (loved). Protostream does >>>> not currently support service definitions right now >>>> but this is something I started to investigate >>>> recently after Galder asked me if I think it's >>>> doable. I think I'll only find out after I do it:) >>>> >>>> Adrian >>>> >>>> >>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>>> Hi Infinispan developers, >>>>> >>>>> I'm working on a solution for developers who need >>>>> to access Infinispan services through different >>>>> programming languages. >>>>> >>>>> The focus is not on developing a full featured >>>>> client, but rather discover the value and the >>>>> limits of this approach. >>>>> >>>>> - is it possible to automatically generate useful >>>>> clients in different languages? >>>>> - can that clients interoperate on the same cache >>>>> with the same data types? >>>>> >>>>> I came out with a small prototype that I would >>>>> like to submit to you and on which I would like to >>>>> gather your impressions. >>>>> >>>>> ?You can found the project here [1]: is a >>>>> gRPC-based client/server architecture for >>>>> Infinispan based on and EmbeddedCache, with very >>>>> few features exposed atm. 
>>>>> >>>>> Currently the project is nothing more than a poc >>>>> with the following interesting features: >>>>> >>>>> - client can be generated in all the grpc >>>>> supported language: java, go, c++ examples are >>>>> provided; >>>>> - the interface is full typed. No need for >>>>> marshaller and clients build in different language >>>>> can cooperate on the same cache; >>>>> >>>>> The second item is my preferred one beacuse it >>>>> frees the developer from data marshalling. >>>>> >>>>> What do you think about? >>>>> Sounds interesting? >>>>> Can you see any flaw? >>>>> >>>>> There's also a list of issues for the future [2], >>>>> basically I would like to investigate these questions: >>>>> How far this architecture can go? >>>>> Topology, events, queries... how many of the >>>>> Infinispan features can be fit in a grpc architecture? >>>>> >>>>> Thank you >>>>> Vittorio >>>>> >>>>> [1] https://github.com/rigazilla/ispn-grpc >>>>> >>>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>>> >>>>> >>>>> -- >>>>> >>>>> Vittorio Rigamonti >>>>> >>>>> Senior Software Engineer >>>>> >>>>> Red Hat >>>>> >>>>> >>>>> >>>>> Milan, Italy >>>>> >>>>> vrigamon at redhat.com >>>>> >>>>> irc: rigazilla >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> >>>> Vittorio Rigamonti >>>> >>>> Senior Software Engineer >>>> >>>> Red Hat >>>> >>>> >>>> >>>> Milan, Italy >>>> >>>> vrigamon at redhat.com >>>> >>>> irc: rigazilla >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> >>>> 
https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> >> >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/d034c807/attachment-0001.html From anistor at redhat.com Wed May 30 06:46:08 2018 From: anistor at redhat.com (Adrian Nistor) Date: Wed, 30 May 2018 13:46:08 +0300 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> Message-ID: <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> Thanks for clarifying this, Galder. Yes, the network layer is indeed the culprit and the purpose of this experiment. What is the approach you envision regarding the IDL? Should we strive for a pure IDL definition of the service? That could be an interesting approach that would make it possible for a third party to generate their own Infinispan gRPC client in any new language for which we do not already offer support, just based on the IDL. And maybe using a different gRPC implementation if they do not find Google's suitable.
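[Editor's note] A pure-IDL service of the kind described here might look like the sketch below, so that a third party could generate a client with protoc and a gRPC plugin alone. All names are illustrative assumptions, not a proposed Infinispan API; the envelope carries type information rather than dumbing keys and values down to bare bytes.

```protobuf
syntax = "proto3";

package infinispan.sketch;

// Generic, fully typed envelope: raw payload plus the type info needed
// to decode it, so the service references no user-defined types.
message TypedValue {
  bytes payload = 1;    // marshalled key or value
  string type_url = 2;  // identifies what the payload bytes contain
}

message GetRequest {
  string cache_name = 1;
  TypedValue key = 2;
}

message PutRequest {
  string cache_name = 1;
  TypedValue key = 2;
  TypedValue value = 3;
}

message ValueResponse {
  TypedValue value = 1;  // left unset when the key is absent
}

service Cache {
  rpc Get(GetRequest) returns (ValueResponse);
  rpc Put(PutRequest) returns (ValueResponse);
}
```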
I was not suggesting we should do type transformation or anything on the client side that would require an extra layer of code on top of what gRPC generates for the client, so maybe a pure IDL-based service definition would indeed be possible, without extra helpers. No type transformation, just type information. Exposing the type info that comes from the server would be enough, a lot better than dumbing everything down to a byte[]. Adrian On 05/30/2018 12:16 PM, Galder Zamarreno wrote: > On Tue, May 29, 2018 at 8:57 PM Adrian Nistor > wrote: > > Vittorio, a few remarks regarding your statement "...The > alternative to this is to develop a protostream equivalent for > each supported language and it doesn't seem really feasible to me." > > No way! That's a big misunderstanding. We do not need to > re-implement the Protostream library in C/C++/C# or any new > supported language. > Protostream is just for Java and it is compatible with Google's > protobuf lib we already use in the other clients. We can continue > using Google's protobuf lib for these clients, with or without gRPC. > Protostream does not handle protobuf services as gRPC does, but we > can add support for that with little effort. > > The real problem here is whether we want to replace our Hot Rod > invocation protocol with gRPC to save on the effort of > implementing and maintaining Hot Rod in all those clients. I > wonder why the obvious question is being avoided in this thread. > > > ^ It is not being avoided. I stated it quite clearly when I replied, > but maybe not with enough detail. So, I said: > > > The biggest problem I see in our client/server architecture is the > ability to quickly deliver features/APIs across multiple language > clients. Both Vittorio and I have seen how long it takes to implement > all the different features available in the Java client and port them to > Node.js, C/C++/C#, etc. This effort led by Vittorio is trying to > improve on that by having some of that work done for us.
Granted, not > all of it will be done, but it should give us some good foundations on > which to build. > > To expand on it a bit further: the reason it takes us longer to get > different features in is because each client implements its own > network layer, parses the protocol and does type transformations > (between byte[] and whatever the client expects). > > IMO, the most costly things there are getting the network layer right > (from experience with Node.js, it has taken a while to do so) and the > parsing work (not only the parsing itself, but doing it in an efficient > way). The network layer also includes load balancing, failover, cluster > failover, etc. > > From past experience, transforming from byte[] to what the client > expects has never really been very problematic for me. What's been > difficult here is coming up with the encoding architecture that Gustavo > led, whose aim was to improve on the initial compatibility mode. But, > with that now clear, understood and proven to solve our issues, the > rest in this area should be fairly straightforward IMO. > > Type transformation, once done, is a constant. As we add more Hot Rod > operations, it's mostly the parsing that starts to become more work. > The network can also become more work if instead of RPC commands you start > supporting stream-based commands. > > gRPC solves the network (FYI: with the key as an HTTP header and a > SubchannelPicker you can do hash-aware routing) and the parsing for us. I > don't see the need for it to solve our type transformations for us. If > it does, great, but does it support our compatibility requirements? > (I had already told Vittorio to check with Gustavo on this). Type > transformation is a lower priority for me; network and parsing are more > important. > > Hope this clarifies my POV better.
> > Cheers > > > > Adrian > > > On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: >> Thanks Adrian, >> >> of course there's a marshalling work under the cover and that is >> reflected into the generated code (specially the accessor methods >> generated from the oneof clause). >> >> My opinion is that on the client side this could be accepted, as >> long as the API are well defined and documented: application >> developer can build an adhoc decorator on the top if needed. The >> alternative to this is to develop a protostream equivalent for >> each supported language and it doesn't seem really feasible to me. >> >> On the server side (java only) the situation is different: >> protobuf is optimized for streaming not for storing so probably a >> Protostream layer is needed. >> >> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >> > wrote: >> >> Hi Vittorio, >> thanks for exploring gRPC. It seems like a very elegant >> solution for exposing services. I'll have a look at your PoC >> soon. >> >> I feel there are some remarks that need to be made regarding >> gRPC. gRPC is just some nice cheesy topping on top of >> protobuf. Google's implementation of protobuf, to be more >> precise. >> It does not need handwritten marshallers, but the 'No need >> for marshaller' does not accurately describe it. Marshallers >> are needed and are generated under the cover by the library >> and so are the data objects and you are unfortunately forced >> to use them. That's both the good news and the bad news:) The >> whole thing looks very promising and friendly for many uses >> cases, especially for demos and PoCs :))). Nobody wants to >> write those marshallers. But it starts to become a nuisance >> if you want to use your own data objects. >> There is also the ugliness and excessive memory footprint of >> the generated code, which is the reason Infinispan did not >> adopt the protobuf-java library although it did adopt >> protobuf as an encoding format. 
>> The Protostream library was created as an alternative >> implementation to solve the aforementioned problems with the >> generated code. It solves this by letting the user provide >> their own data objects. And for the marshallers it gives you >> two options: a) write the marshaller yourself (hated), b) >> annotated your data objects and the marshaller gets generated >> (loved). Protostream does not currently support service >> definitions right now but this is something I started to >> investigate recently after Galder asked me if I think it's >> doable. I think I'll only find out after I do it:) >> >> Adrian >> >> >> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>> Hi Infinispan developers, >>> >>> I'm working on a solution for developers who need to access >>> Infinispan services? through different programming languages. >>> >>> The focus is not on developing a full featured client, but >>> rather discover the value and the limits of this approach. >>> >>> - is it possible to automatically generate useful clients in >>> different languages? >>> - can that clients interoperate on the same cache with the >>> same data types? >>> >>> I came out with a small prototype that I would like to >>> submit to you and on which I would like to gather your >>> impressions. >>> >>> ?You can found the project here [1]: is a gRPC-based >>> client/server architecture for Infinispan based on and >>> EmbeddedCache, with very few features exposed atm. >>> >>> Currently the project is nothing more than a poc with the >>> following interesting features: >>> >>> - client can be generated in all the grpc supported >>> language: java, go, c++ examples are provided; >>> - the interface is full typed. No need for marshaller and >>> clients build in different language can cooperate on the >>> same cache; >>> >>> The second item is my preferred one beacuse it frees the >>> developer from data marshalling. >>> >>> What do you think about? >>> Sounds interesting? 
>>> Can you see any flaw? >>> >>> There's also a list of issues for the future [2], basically >>> I would like to investigate these questions: >>> How far this architecture can go? >>> Topology, events, queries... how many of the Infinispan >>> features can be fit in a grpc architecture? >>> >>> Thank you >>> Vittorio >>> >>> [1] https://github.com/rigazilla/ispn-grpc >>> [2] https://github.com/rigazilla/ispn-grpc/issues >>> >>> -- >>> >>> Vittorio Rigamonti >>> >>> Senior Software Engineer >>> >>> Red Hat >>> >>> >>> >>> Milan, Italy >>> >>> vrigamon at redhat.com >>> >>> irc: rigazilla >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> -- >> >> Vittorio Rigamonti >> >> Senior Software Engineer >> >> Red Hat >> >> >> >> Milan, Italy >> >> vrigamon at redhat.com >> >> irc: rigazilla >> >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
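[Editorial note: the typed, IDL-defined cache service Vittorio describes (and the pure-IDL approach Adrian asks about later in the thread) might look roughly like the following sketch. This is a hypothetical illustration, not the actual ispn-grpc .proto definitions; all message and service names are invented. It uses a oneof wrapper, as Vittorio mentions, which is what produces the generated accessor methods he refers to.]

```proto
// Hypothetical IDL sketch -- not the real ispn-grpc .proto files.
syntax = "proto3";

package ispn.demo;

// Application-level key/value types: clients in any gRPC-supported
// language get typed stubs generated from these definitions.
message Key {
  oneof wrapped {
    string str_key = 1;
    int64  long_key = 2;
  }
}

message Value {
  oneof wrapped {
    string str_value = 1;
    bytes  raw_value = 2;
  }
}

message PutRequest {
  Key key = 1;
  Value value = 2;
}

message GetRequest {
  Key key = 1;
}

// The cache itself, exposed as a gRPC service: the generated clients
// exchange typed messages rather than opaque byte[] payloads.
service Cache {
  rpc put (PutRequest) returns (Value); // returns the previous value
  rpc get (GetRequest) returns (Value);
}
```

A third party could, as Adrian suggests, generate a compatible client from such an IDL alone, with any conformant gRPC implementation.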
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180530/3ea34fb9/attachment-0001.html From rvansa at redhat.com Wed May 30 07:47:45 2018 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 30 May 2018 13:47:45 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> Message-ID: <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> On 05/30/2018 12:46 PM, Adrian Nistor wrote: > Thanks for clarifying this Galder. > Yes, the network layer is indeed the culprit and the purpose of this > experiment. > > What is the approach you envision regarding the IDL? Should we strive > for a pure IDL definition of the service? That could be an interesting > approach that would make it possible for a third party to generate > their own infinispan grpc client in any new language that we do not > already offer support, just based on the IDL. And maybe using a > different grpc implementation if they do not find suitable the one > from google. > > I was not suggesting we should do type transformation or anything on > the client side that would require an extra layer of code on top of > what grpc generates for the client, so maybe a pure IDL based service > definition would indeed be possible, without extra helpers. No type > transformation, just type information. Exposing the type info that > comes from the server would be enough, a lot better than dumbing > everything down to a byte[]. I may be wrong but key transformation on client is necessary for correct hash-aware routing, isn't it? We need to get byte array for each key and apply murmur hash there (IIUC even when we use protobuf as the storage format, segment is based on the raw protobuf bytes, right?). 
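[Editorial note: Radim's routing concern can be made concrete with a small sketch. This is simplified: the real Hot Rod client applies MurmurHash3 to the key's serialized bytes and maps hash ranges to segments using the topology it receives from the server; the stand-in hash function, key, and segment count below are illustrative only.]

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SegmentRouting {

    // Stand-in for the real hash function (Hot Rod uses MurmurHash3
    // over the key's serialized bytes -- e.g. its raw protobuf encoding,
    // which is exactly why the client needs access to those bytes).
    static int hash(byte[] keyBytes) {
        return Arrays.hashCode(keyBytes);
    }

    // Map a key to one of numSegments segments, as a hash-aware client
    // must do before it can pick the server owning that segment.
    static int segmentOf(byte[] keyBytes, int numSegments) {
        return (hash(keyBytes) & Integer.MAX_VALUE) % numSegments;
    }

    public static void main(String[] args) {
        byte[] key = "user:42".getBytes(StandardCharsets.UTF_8);
        int segment = segmentOf(key, 256);
        // The topology gives the client a segment -> owners table;
        // the request is then routed to owners[segment].
        System.out.println("segment = " + segment);
    }
}
```

If, as Adrian suggests below, the hash were computed on the server over the storage format, only the segment lookup would remain client-side.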
Radim > > Adrian > > On 05/30/2018 12:16 PM, Galder Zamarreno wrote: >> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor > > wrote: >> >> Vittorio, a few remarks regarding your statement "...The >> alternative to this is to develop a protostream equivalent for >> each supported language and it doesn't seem really feasible to me." >> >> No way! That's a big misunderstanding. We do not need to >> re-implement the protostream library in C/C++/C# or any new >> supported language. >> Protostream is just for Java and it is compatible with Google's >> protobuf lib we already use in the other clients. We can continue >> using Google's protobuf lib for these clients, with or without gRPC. >> Protostream does not handle protobuf services as gRPC does, but >> we can add support for that with little effort. >> >> The real problem here is if we want to replace our hot rod >> invocation protocol with gRPC to save on the effort of >> implementing and maintaining hot rod in all those clients. I >> wonder why the obvious question is being avoided in this thread. >> >> >> ^ It is not being avoided. I stated it quite clearly when I replied >> but maybe not with enough detail. So, I said: >> >> > The biggest problem I see in our client/server architecture is the >> ability to quickly deliver features/APIs across multiple language >> clients. Both Vittorio and I have seen how long it takes to implement >> all the different features available in the Java client and port them to >> Node.js, C/C++/C#...etc. This effort, led by Vittorio, is trying to >> improve on that by having some of that work done for us. Granted, not >> all of it will be done, but it should give us some good foundations >> on which to build. >> >> To expand on it a bit further: the reason it takes us longer to get >> different features in is because each client implements its own >> network layer, parses the protocol and does type transformations >> (between byte[] and whatever the client expects). 
>> >> IMO, the most costly things there are getting the network layer right >> (from experience with Node.js, it has taken a while to do so) and >> parsing work (not only parsing itself, but doing it in an efficient >> way). Network layer also includes load balancing, failover, cluster >> failover...etc. >> >> From past experience, transforming from byte[] to what the client >> expects has never really been very problematic for me. What's been >> difficult here is coming up with the encoding architecture that Gustavo >> led, whose aim was to improve on the initial compatibility mode. >> But, with that now clear, understood and proven to solve our issues, >> the rest in this area should be fairly straightforward IMO. >> >> Type transformation, once done, is a constant. As we add more Hot Rod >> operations, it's mostly the parsing that starts to become more work. >> Network can also become more work if instead of RPC commands you >> start supporting stream-based commands. >> >> gRPC solves the network (FYI: with the key as an HTTP header and >> SubchannelPicker you can do hash-aware routing) and parsing for us. I >> don't see the need for it to solve our type transformations for us. >> If it does it, great, but does it support our compatibility >> requirements? (I had already told Vittorio to check Gustavo on this). >> Type transformation is a lower prio for me, network and parsing are >> more important. >> >> Hope this clarifies my POV better. >> >> Cheers >> >> >> >> Adrian >> >> >> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: >>> Thanks Adrian, >>> >>> of course there's marshalling work under the cover and that is >>> reflected into the generated code (especially the accessor >>> methods generated from the oneof clause). >>> >>> My opinion is that on the client side this could be accepted, as >>> long as the APIs are well defined and documented: application >>> developers can build an ad-hoc decorator on top if needed. 
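[Editorial note: Galder's SubchannelPicker remark can be illustrated. In grpc-java, a custom LoadBalancer installs a SubchannelPicker whose pickSubchannel(...) sees each call's metadata, so a key (or its hash) sent as a header can drive hash-aware routing. The sketch below simulates that idea in plain Java with no gRPC dependency; the header name and channel strings are invented stand-ins for real Metadata keys and Subchannels.]

```java
import java.util.List;
import java.util.Map;

public class HashAwarePicker {

    // Invented header name; in gRPC this would be a Metadata.Key<String>.
    static final String KEY_HASH_HEADER = "ispn-key-hash";

    private final List<String> channels; // stand-ins for Subchannels

    HashAwarePicker(List<String> channels) {
        this.channels = channels;
    }

    // Analogue of SubchannelPicker.pickSubchannel(PickSubchannelArgs):
    // read the key hash from the call's headers and pick the channel
    // that owns the corresponding segment.
    String pick(Map<String, String> headers) {
        int hash = Integer.parseInt(headers.get(KEY_HASH_HEADER));
        int idx = (hash & Integer.MAX_VALUE) % channels.size();
        return channels.get(idx);
    }

    public static void main(String[] args) {
        HashAwarePicker picker =
            new HashAwarePicker(List.of("node-a:11222", "node-b:11222"));
        String target = picker.pick(Map.of(KEY_HASH_HEADER, "7"));
        System.out.println("routing to " + target);
    }
}
```

In a real grpc-java client this logic would live in a LoadBalancer whose picker returns PickResult.withSubchannel(...) for the chosen connection.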
The >>> alternative to this is to develop a protostream equivalent for >>> each supported language and it doesn't seem really feasible to me. >>> >>> On the server side (java only) the situation is different: >>> protobuf is optimized for streaming not for storing so probably >>> a Protostream layer is needed. >>> >>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>> > wrote: >>> >>> Hi Vittorio, >>> thanks for exploring gRPC. It seems like a very elegant >>> solution for exposing services. I'll have a look at your PoC >>> soon. >>> >>> I feel there are some remarks that need to be made regarding >>> gRPC. gRPC is just some nice cheesy topping on top of >>> protobuf. Google's implementation of protobuf, to be more >>> precise. >>> It does not need handwritten marshallers, but the 'No need >>> for marshaller' does not accurately describe it. Marshallers >>> are needed and are generated under the cover by the library >>> and so are the data objects and you are unfortunately forced >>> to use them. That's both the good news and the bad news:) >>> The whole thing looks very promising and friendly for many >>> uses cases, especially for demos and PoCs :))). Nobody wants >>> to write those marshallers. But it starts to become a >>> nuisance if you want to use your own data objects. >>> There is also the ugliness and excessive memory footprint of >>> the generated code, which is the reason Infinispan did not >>> adopt the protobuf-java library although it did adopt >>> protobuf as an encoding format. >>> The Protostream library was created as an alternative >>> implementation to solve the aforementioned problems with the >>> generated code. It solves this by letting the user provide >>> their own data objects. And for the marshallers it gives you >>> two options: a) write the marshaller yourself (hated), b) >>> annotated your data objects and the marshaller gets >>> generated (loved). 
Protostream does not currently support >>> service definitions right now but this is something I >>> started to investigate recently after Galder asked me if I >>> think it's doable. I think I'll only find out after I do it:) >>> >>> Adrian >>> >>> >>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>> Hi Infinispan developers, >>>> >>>> I'm working on a solution for developers who need to access >>>> Infinispan services? through different programming languages. >>>> >>>> The focus is not on developing a full featured client, but >>>> rather discover the value and the limits of this approach. >>>> >>>> - is it possible to automatically generate useful clients >>>> in different languages? >>>> - can that clients interoperate on the same cache with the >>>> same data types? >>>> >>>> I came out with a small prototype that I would like to >>>> submit to you and on which I would like to gather your >>>> impressions. >>>> >>>> ?You can found the project here [1]: is a gRPC-based >>>> client/server architecture for Infinispan based on and >>>> EmbeddedCache, with very few features exposed atm. >>>> >>>> Currently the project is nothing more than a poc with the >>>> following interesting features: >>>> >>>> - client can be generated in all the grpc supported >>>> language: java, go, c++ examples are provided; >>>> - the interface is full typed. No need for marshaller and >>>> clients build in different language can cooperate on the >>>> same cache; >>>> >>>> The second item is my preferred one beacuse it frees the >>>> developer from data marshalling. >>>> >>>> What do you think about? >>>> Sounds interesting? >>>> Can you see any flaw? >>>> >>>> There's also a list of issues for the future [2], basically >>>> I would like to investigate these questions: >>>> How far this architecture can go? >>>> Topology, events, queries... how many of the Infinispan >>>> features can be fit in a grpc architecture? 
>>>> >>>> Thank you >>>> Vittorio >>>> >>>> [1] https://github.com/rigazilla/ispn-grpc >>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>> >>>> -- >>>> >>>> Vittorio Rigamonti >>>> >>>> Senior Software Engineer >>>> >>>> Red Hat >>>> >>>> >>>> >>>> Milan, Italy >>>> >>>> vrigamon at redhat.com >>>> >>>> irc: rigazilla >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> >>> >>> -- >>> >>> Vittorio Rigamonti >>> >>> Senior Software Engineer >>> >>> Red Hat >>> >>> >>> >>> Milan, Italy >>> >>> vrigamon at redhat.com >>> >>> irc: rigazilla >>> >>> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From emmanuel at hibernate.org Wed May 30 08:17:20 2018 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 30 May 2018 14:17:20 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> Message-ID: <20180530121720.GB38319@hibernate.org> On Wed 18-05-30 11:16, Galder Zamarreno wrote: >On Tue, May 29, 2018 at 8:57 PM Adrian Nistor wrote: > >> Vittorio, a few remarks regarding your statement "...The alternative to >> this is to develop a protostream equivalent for each supported language and >> it doesn't 
seem really feasible to me." >> >> No way! That's a big misunderstanding. We do not need to re-implement the >> protostream library in C/C++/C# or any new supported language. >> Protostream is just for Java and it is compatible with Google's protobuf >> lib we already use in the other clients. We can continue using Google's >> protobuf lib for these clients, with or without gRPC. >> Protostream does not handle protobuf services as gRPC does, but we can add >> support for that with little effort. >> >> The real problem here is if we want to replace our hot rod invocation >> protocol with gRPC to save on the effort of implementing and maintaining >> hot rod in all those clients. I wonder why the obvious question is being >> avoided in this thread. >> > >^ It is not being avoided. I stated it quite clearly when I replied but >maybe not with enough detail. So, I said: > >> The biggest problem I see in our client/server architecture is the >ability to quickly deliver features/APIs across multiple language clients. >Both Vittorio and I have seen how long it takes to implement all the >different features available in the Java client and port them to Node.js, >C/C++/C#...etc. This effort, led by Vittorio, is trying to improve on that >by having some of that work done for us. Granted, not all of it will be >done, but it should give us some good foundations on which to build. > >To expand on it a bit further: the reason it takes us longer to get >different features in is because each client implements its own network >layer, parses the protocol and does type transformations (between byte[] >and whatever the client expects). > >IMO, the most costly things there are getting the network layer right (from >experience with Node.js, it has taken a while to do so) and parsing work >(not only parsing itself, but doing it in an efficient way). Network layer >also includes load balancing, failover, cluster failover...etc. 
> > From past experience, transforming from byte[] to what the client expects >has never really been very problematic for me. What's been difficult here >is coming up with the encoding architecture that Gustavo led, whose aim was to >improve on the initial compatibility mode. But, with that now clear, >understood and proven to solve our issues, the rest in this area should be >fairly straightforward IMO. > >Type transformation, once done, is a constant. As we add more Hot Rod >operations, it's mostly the parsing that starts to become more work. >Network can also become more work if instead of RPC commands you start >supporting stream-based commands. > >gRPC solves the network (FYI: with the key as an HTTP header and SubchannelPicker >you can do hash-aware routing) and parsing for us. I don't see the need for >it to solve our type transformations for us. If it does it, great, but does >it support our compatibility requirements? (I had already told Vittorio to >check Gustavo on this). Type transformation is a lower prio for me, network >and parsing are more important. > >Hope this clarifies my POV better. I think I had a somewhat different view of the project goal, so let me clarify. Who will do the hash-aware connection to the grid? Each client (i.e. manual coding work for each platform)? Or a generic C/Rust/Assembly client that acts as a gRPC server? 
Which of the following architectures is closest to what each of you is saying:

A
|--------------- Client ---------------------|               |-- server --|
 Clt RT <-- inter process --> Generic C client <--- HR ------> Data Grid

B
|--------------- Client --------------------|                |-- server --|
 Clt RT <-- gRPC call --> Generic C client    <--- HR ------> Data Grid

C
|--------------- Client ------------------|                  |-- server --|
 Clt RT <-- inter process --> Generic C clt   <--- gRPC ----> Data Grid

D
|--------------- Client -----------------|                   |-- server --|
 Clt RT <-- gRPC call --> Generic C clt       <--- gRPC ----> Data Grid

E
|--------------- Client ---------------------|               |-- server --|
 Clt RT gRPC + manual coding (hash-aware etc)  <-- gRPC -->   Data Grid

F
Yet another alien

My understanding is that you guys are talking about E, which still leaves a lot of polyglot bugs if you ask me. Emmanuel From anistor at redhat.com Wed May 30 08:26:51 2018 From: anistor at redhat.com (Adrian Nistor) Date: Wed, 30 May 2018 15:26:51 +0300 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> Message-ID: <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> Yes, the client needs that hash, but that does not necessarily mean it has to compute it itself. The hash should be applied to the storage format, which might be different from the format the client sees. So hash computation could be done on the server, just a thought. On 05/30/2018 02:47 PM, Radim Vansa wrote: > On 05/30/2018 12:46 PM, Adrian Nistor wrote: >> Thanks for clarifying this Galder. >> Yes, the network layer is indeed the culprit and the purpose of this >> experiment. >> >> What is the approach you envision regarding the IDL? 
Should we strive >> for a pure IDL definition of the service? That could be an interesting >> approach that would make it possible for a third party to generate >> their own infinispan grpc client in any new language that we do not >> already offer support, just based on the IDL. And maybe using a >> different grpc implementation if they do not find suitable the one >> from google. >> >> I was not suggesting we should do type transformation or anything on >> the client side that would require an extra layer of code on top of >> what grpc generates for the client, so maybe a pure IDL based service >> definition would indeed be possible, without extra helpers. No type >> transformation, just type information. Exposing the type info that >> comes from the server would be enough, a lot better than dumbing >> everything down to a byte[]. > I may be wrong but key transformation on client is necessary for correct > hash-aware routing, isn't it? We need to get byte array for each key and > apply murmur hash there (IIUC even when we use protobuf as the storage > format, segment is based on the raw protobuf bytes, right?). > > Radim > >> Adrian >> >> On 05/30/2018 12:16 PM, Galder Zamarreno wrote: >>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor >> > wrote: >>> >>> Vittorio, a few remarks regarding your statement "...The >>> alternative to this is to develop a protostream equivalent for >>> each supported language and it doesn't seem really feasible to me." >>> >>> No way! That's a big misunderstanding. We do not need to >>> re-implement the protostream library in C/C++/C# or any new >>> supported language. >>> Protostream is just for Java and it is compatible with Google's >>> protobuf lib we already use in the other clients. We can continue >>> using Google's protobuf lib for these clients, with or without gRPC. >>> Protostream does not handle protobuf services as gRPC does, but >>> we can add support for that with little effort. 
>>> >>> The real problem here is if we want to replace our hot rod >>> invocation protocol with gRPC to save on the effort of >>> implementing and maintaining hot rod in all those clients. I >>> wonder why the obvious question is being avoided in this thread. >>> >>> >>> ^ It is not being avoided. I stated it quite clearly when I replied >>> but maybe not with enough detail. So, I said: >>> >>>> ?The biggest problem I see in our client/server architecture is the >>> ability to quickly deliver features/APIs across multiple language >>> clients. Both Vittorio and I have seen how long it takes to implement >>> all the different features available in Java client and port them to >>> Node.js, C/C++/C#...etc. This effort lead by Vittorio is trying to >>> improve on that by having some of that work done for us. Granted, not >>> all of it will be done, but it should give us some good foundations >>> on which to build. >>> >>> To expand on it a bit further: the reason it takes us longer to get >>> different features in is because each client implements its own >>> network layer, parses the protocol and does type transformations >>> (between byte[] and whatever the client expects). >>> >>> IMO, the most costly things there are getting the network layer right >>> (from experience with Node.js, it has taken a while to do so) and >>> parsing work (not only parsing itself, but doing it in a efficient >>> way). Network layer also includes load balancing, failover, cluster >>> failover...etc. >>> >>> From past experience, transforming from byte[] to what the client >>> expects has never really been very problematic for me. What's been >>> difficult here is coming up with encoding architecture that Gustavo >>> lead, whose aim was to improve on the initial compatibility mode. >>> But, with that now clear, understood and proven to solve our issues, >>> the rest in this area should be fairly straightforward IMO. >>> >>> Type transformation, once done, is a constant. 
As we add more Hot Rod >>> operations, it's mostly the parsing that starts to become more work. >>> Network can also become more work if instead of RPC commands you >>> start supporting streams based commands. >>> >>> gRPC solves the network (FYI: with key as HTTP header and >>> SubchannelPicker you can do hash-aware routing) and parsing for us. I >>> don't see the need for it to solve our type transformations for us. >>> If it does it, great, but does it support our compatibility >>> requirements? (I had already told Vittorio to check Gustavo on this). >>> Type transformation is a lower prio for me, network and parsing are >>> more important. >>> >>> Hope this clarifies better my POV. >>> >>> Cheers >>> >>> >>> >>> Adrian >>> >>> >>> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: >>>> Thanks Adrian, >>>> >>>> of course there's a marshalling work under the cover and that is >>>> reflected into the generated code (specially the accessor >>>> methods generated from the oneof clause). >>>> >>>> My opinion is that on the client side this could be accepted, as >>>> long as the API are well defined and documented: application >>>> developer can build an adhoc decorator on the top if needed. The >>>> alternative to this is to develop a protostream equivalent for >>>> each supported language and it doesn't seem really feasible to me. >>>> >>>> On the server side (java only) the situation is different: >>>> protobuf is optimized for streaming not for storing so probably >>>> a Protostream layer is needed. >>>> >>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>>> > wrote: >>>> >>>> Hi Vittorio, >>>> thanks for exploring gRPC. It seems like a very elegant >>>> solution for exposing services. I'll have a look at your PoC >>>> soon. >>>> >>>> I feel there are some remarks that need to be made regarding >>>> gRPC. gRPC is just some nice cheesy topping on top of >>>> protobuf. Google's implementation of protobuf, to be more >>>> precise. 
>>>> It does not need handwritten marshallers, but the 'No need >>>> for marshaller' does not accurately describe it. Marshallers >>>> are needed and are generated under the cover by the library >>>> and so are the data objects and you are unfortunately forced >>>> to use them. That's both the good news and the bad news:) >>>> The whole thing looks very promising and friendly for many >>>> uses cases, especially for demos and PoCs :))). Nobody wants >>>> to write those marshallers. But it starts to become a >>>> nuisance if you want to use your own data objects. >>>> There is also the ugliness and excessive memory footprint of >>>> the generated code, which is the reason Infinispan did not >>>> adopt the protobuf-java library although it did adopt >>>> protobuf as an encoding format. >>>> The Protostream library was created as an alternative >>>> implementation to solve the aforementioned problems with the >>>> generated code. It solves this by letting the user provide >>>> their own data objects. And for the marshallers it gives you >>>> two options: a) write the marshaller yourself (hated), b) >>>> annotated your data objects and the marshaller gets >>>> generated (loved). Protostream does not currently support >>>> service definitions right now but this is something I >>>> started to investigate recently after Galder asked me if I >>>> think it's doable. I think I'll only find out after I do it:) >>>> >>>> Adrian >>>> >>>> >>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>>> Hi Infinispan developers, >>>>> >>>>> I'm working on a solution for developers who need to access >>>>> Infinispan services? through different programming languages. >>>>> >>>>> The focus is not on developing a full featured client, but >>>>> rather discover the value and the limits of this approach. >>>>> >>>>> - is it possible to automatically generate useful clients >>>>> in different languages? 
>>>>> - can that clients interoperate on the same cache with the >>>>> same data types? >>>>> >>>>> I came out with a small prototype that I would like to >>>>> submit to you and on which I would like to gather your >>>>> impressions. >>>>> >>>>> ?You can found the project here [1]: is a gRPC-based >>>>> client/server architecture for Infinispan based on and >>>>> EmbeddedCache, with very few features exposed atm. >>>>> >>>>> Currently the project is nothing more than a poc with the >>>>> following interesting features: >>>>> >>>>> - client can be generated in all the grpc supported >>>>> language: java, go, c++ examples are provided; >>>>> - the interface is full typed. No need for marshaller and >>>>> clients build in different language can cooperate on the >>>>> same cache; >>>>> >>>>> The second item is my preferred one beacuse it frees the >>>>> developer from data marshalling. >>>>> >>>>> What do you think about? >>>>> Sounds interesting? >>>>> Can you see any flaw? >>>>> >>>>> There's also a list of issues for the future [2], basically >>>>> I would like to investigate these questions: >>>>> How far this architecture can go? >>>>> Topology, events, queries... how many of the Infinispan >>>>> features can be fit in a grpc architecture? 
>>>>> >>>>> Thank you >>>>> Vittorio >>>>> >>>>> [1] https://github.com/rigazilla/ispn-grpc >>>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>>> >>>>> -- >>>>> >>>>> Vittorio Rigamonti >>>>> >>>>> Senior Software Engineer >>>>> >>>>> Red Hat >>>>> >>>>> >>>>> >>>>> Milan, Italy >>>>> >>>>> vrigamon at redhat.com >>>>> >>>>> irc: rigazilla >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>>> >>>> -- >>>> >>>> Vittorio Rigamonti >>>> >>>> Senior Software Engineer >>>> >>>> Red Hat >>>> >>>> >>>> >>>> Milan, Italy >>>> >>>> vrigamon at redhat.com >>>> >>>> irc: rigazilla >>>> >>>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Wed May 30 08:53:45 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 30 May 2018 13:53:45 +0100 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> Message-ID: On 30 May 2018 at 13:26, Adrian Nistor wrote: > Yest, the client needs that hash but that 
does not necessarily mean it > has to compute it itself. > The hash should be applied to the storage format which might be > different from the format the client sees. So hash computation could be > done on the server, just a thought. Unless we want to explore some form of hybrid gRPC which benefits from Hot Rod intelligence level 3? In which case the client will need to compute the hash before it can hint the network layer where to connect to. Thanks, Sanne > > On 05/30/2018 02:47 PM, Radim Vansa wrote: >> On 05/30/2018 12:46 PM, Adrian Nistor wrote: >>> Thanks for clarifying this Galder. >>> Yes, the network layer is indeed the culprit and the purpose of this >>> experiment. >>> >>> What is the approach you envision regarding the IDL? Should we strive >>> for a pure IDL definition of the service? That could be an interesting >>> approach that would make it possible for a third party to generate >>> their own infinispan grpc client in any new language that we do not >>> already offer support, just based on the IDL. And maybe using a >>> different grpc implementation if they do not find suitable the one >>> from google. >>> >>> I was not suggesting we should do type transformation or anything on >>> the client side that would require an extra layer of code on top of >>> what grpc generates for the client, so maybe a pure IDL based service >>> definition would indeed be possible, without extra helpers. No type >>> transformation, just type information. Exposing the type info that >>> comes from the server would be enough, a lot better than dumbing >>> everything down to a byte[]. >> I may be wrong but key transformation on client is necessary for correct >> hash-aware routing, isn't it? We need to get byte array for each key and >> apply murmur hash there (IIUC even when we use protobuf as the storage >> format, segment is based on the raw protobuf bytes, right?). 
>> >> Radim >> >>> Adrian >>> >>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote: >>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor >>> > wrote: >>>> >>>> Vittorio, a few remarks regarding your statement "...The >>>> alternative to this is to develop a protostream equivalent for >>>> each supported language and it doesn't seem really feasible to me." >>>> >>>> No way! That's a big misunderstanding. We do not need to >>>> re-implement the protostream library in C/C++/C# or any new >>>> supported language. >>>> Protostream is just for Java and it is compatible with Google's >>>> protobuf lib we already use in the other clients. We can continue >>>> using Google's protobuf lib for these clients, with or without gRPC. >>>> Protostream does not handle protobuf services as gRPC does, but >>>> we can add support for that with little effort. >>>> >>>> The real problem here is if we want to replace our hot rod >>>> invocation protocol with gRPC to save on the effort of >>>> implementing and maintaining hot rod in all those clients. I >>>> wonder why the obvious question is being avoided in this thread. >>>> >>>> >>>> ^ It is not being avoided. I stated it quite clearly when I replied >>>> but maybe not with enough detail. So, I said: >>>> >>>>> The biggest problem I see in our client/server architecture is the >>>> ability to quickly deliver features/APIs across multiple language >>>> clients. Both Vittorio and I have seen how long it takes to implement >>>> all the different features available in Java client and port them to >>>> Node.js, C/C++/C#...etc. This effort lead by Vittorio is trying to >>>> improve on that by having some of that work done for us. Granted, not >>>> all of it will be done, but it should give us some good foundations >>>> on which to build. 
>>>> >>>> To expand on it a bit further: the reason it takes us longer to get >>>> different features in is because each client implements its own >>>> network layer, parses the protocol and does type transformations >>>> (between byte[] and whatever the client expects). >>>> >>>> IMO, the most costly things there are getting the network layer right >>>> (from experience with Node.js, it has taken a while to do so) and >>>> parsing work (not only parsing itself, but doing it in an efficient >>>> way). Network layer also includes load balancing, failover, cluster >>>> failover...etc. >>>> >>>> From past experience, transforming from byte[] to what the client >>>> expects has never really been very problematic for me. What's been >>>> difficult here is coming up with the encoding architecture that Gustavo >>>> led, whose aim was to improve on the initial compatibility mode. >>>> But, with that now clear, understood and proven to solve our issues, >>>> the rest in this area should be fairly straightforward IMO. >>>> >>>> Type transformation, once done, is a constant. As we add more Hot Rod >>>> operations, it's mostly the parsing that starts to become more work. >>>> Network can also become more work if instead of RPC commands you >>>> start supporting streams based commands. >>>> >>>> gRPC solves the network (FYI: with key as HTTP header and >>>> SubchannelPicker you can do hash-aware routing) and parsing for us. I >>>> don't see the need for it to solve our type transformations for us. >>>> If it does it, great, but does it support our compatibility >>>> requirements? (I had already told Vittorio to check Gustavo on this). >>>> Type transformation is a lower prio for me, network and parsing are >>>> more important. >>>> >>>> Hope this clarifies better my POV.
>>>> >>>> Cheers >>>> >>>> >>>> >>>> Adrian >>>> >>>> >>>> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: >>>>> Thanks Adrian, >>>>> >>>>> of course there's marshalling work under the cover and that is >>>>> reflected into the generated code (especially the accessor >>>>> methods generated from the oneof clause). >>>>> >>>>> My opinion is that on the client side this could be accepted, as >>>>> long as the APIs are well defined and documented: application >>>>> developer can build an ad hoc decorator on top if needed. The >>>>> alternative to this is to develop a protostream equivalent for >>>>> each supported language and it doesn't seem really feasible to me. >>>>> >>>>> On the server side (java only) the situation is different: >>>>> protobuf is optimized for streaming not for storing so probably >>>>> a Protostream layer is needed. >>>>> >>>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>>>> > wrote: >>>>> >>>>> Hi Vittorio, >>>>> thanks for exploring gRPC. It seems like a very elegant >>>>> solution for exposing services. I'll have a look at your PoC >>>>> soon. >>>>> >>>>> I feel there are some remarks that need to be made regarding >>>>> gRPC. gRPC is just some nice cheesy topping on top of >>>>> protobuf. Google's implementation of protobuf, to be more >>>>> precise. >>>>> It does not need handwritten marshallers, but the 'No need >>>>> for marshaller' does not accurately describe it. Marshallers >>>>> are needed and are generated under the cover by the library >>>>> and so are the data objects and you are unfortunately forced >>>>> to use them. That's both the good news and the bad news:) >>>>> The whole thing looks very promising and friendly for many >>>>> use cases, especially for demos and PoCs :))). Nobody wants >>>>> to write those marshallers. But it starts to become a >>>>> nuisance if you want to use your own data objects.
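As an illustration of the "pure IDL" service definition Adrian asks about elsewhere in the thread — and of the oneof-based typed messages Vittorio mentions above — a minimal cache service might look like the sketch below. All names here are hypothetical, chosen for illustration; they are not the actual ispn-grpc definitions.

```protobuf
// Hypothetical sketch, not the real ispn-grpc IDL.
syntax = "proto3";

package infinispan.sketch;

message Key {
  oneof key { string string_key = 1; int64 long_key = 2; }
}

message Value {
  oneof value { string string_value = 1; bytes raw_value = 2; }
}

message PutRequest {
  string cache_name = 1;
  Key key = 2;
  Value value = 3;
}

message GetRequest {
  string cache_name = 1;
  Key key = 2;
}

service Cache {
  rpc Put(PutRequest) returns (Value); // previous value, if any
  rpc Get(GetRequest) returns (Value);
}
```

From a definition like this, protoc can generate the typed stubs and messages (including the oneof accessors Vittorio refers to) for each supported language, which is the interoperability property the PoC is after.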
>>>>> There is also the ugliness and excessive memory footprint of >>>>> the generated code, which is the reason Infinispan did not >>>>> adopt the protobuf-java library although it did adopt >>>>> protobuf as an encoding format. >>>>> The Protostream library was created as an alternative >>>>> implementation to solve the aforementioned problems with the >>>>> generated code. It solves this by letting the user provide >>>>> their own data objects. And for the marshallers it gives you >>>>> two options: a) write the marshaller yourself (hated), b) >>>>> annotate your data objects and the marshaller gets >>>>> generated (loved). Protostream does not currently support >>>>> service definitions but this is something I >>>>> started to investigate recently after Galder asked me if I >>>>> think it's doable. I think I'll only find out after I do it:) >>>>> >>>>> Adrian >>>>> >>>>> >>>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>>>> Hi Infinispan developers, >>>>>> >>>>>> I'm working on a solution for developers who need to access >>>>>> Infinispan services through different programming languages. >>>>>> >>>>>> The focus is not on developing a full-featured client, but >>>>>> rather to discover the value and the limits of this approach. >>>>>> >>>>>> - is it possible to automatically generate useful clients >>>>>> in different languages? >>>>>> - can those clients interoperate on the same cache with the >>>>>> same data types? >>>>>> >>>>>> I came up with a small prototype that I would like to >>>>>> submit to you and on which I would like to gather your >>>>>> impressions. >>>>>> >>>>>> You can find the project here [1]: it is a gRPC-based >>>>>> client/server architecture for Infinispan based on an >>>>>> EmbeddedCache, with very few features exposed atm.
>>>>>> >>>>>> Currently the project is nothing more than a PoC with the >>>>>> following interesting features: >>>>>> >>>>>> - the client can be generated in all the gRPC-supported >>>>>> languages: java, go and c++ examples are provided; >>>>>> - the interface is fully typed. No need for a marshaller, and >>>>>> clients built in different languages can cooperate on the >>>>>> same cache; >>>>>> >>>>>> The second item is my preferred one because it frees the >>>>>> developer from data marshalling. >>>>>> >>>>>> What do you think about it? >>>>>> Sounds interesting? >>>>>> Can you see any flaw? >>>>>> >>>>>> There's also a list of issues for the future [2], basically >>>>>> I would like to investigate these questions: >>>>>> How far can this architecture go? >>>>>> Topology, events, queries... how many of the Infinispan >>>>>> features can fit into a gRPC architecture? >>>>>> >>>>>> Thank you >>>>>> Vittorio >>>>>> >>>>>> [1] https://github.com/rigazilla/ispn-grpc >>>>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>>>> >>>>>> -- >>>>>> >>>>>> Vittorio Rigamonti >>>>>> >>>>>> Senior Software Engineer >>>>>> >>>>>> Red Hat >>>>>> >>>>>> >>>>>> >>>>>> Milan, Italy >>>>>> >>>>>> vrigamon at redhat.com >>>>>> >>>>>> irc: rigazilla >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> >>>>> Vittorio Rigamonti >>>>> >>>>> Senior Software Engineer >>>>> >>>>> Red Hat >>>>> >>>>> >>>>> >>>>> Milan, Italy >>>>> >>>>> vrigamon at redhat.com >>>>> >>>>> irc: rigazilla >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list
>>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Wed May 30 11:00:16 2018 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 30 May 2018 17:00:16 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> Message-ID: On 05/30/2018 02:53 PM, Sanne Grinovero wrote: > On 30 May 2018 at 13:26, Adrian Nistor wrote: >> Yes, the client needs that hash but that does not necessarily mean it >> has to compute it itself. >> The hash should be applied to the storage format which might be >> different from the format the client sees. So hash computation could be >> done on the server, just a thought. > Unless we want to explore some form of hybrid gRPC which benefits from > Hot Rod intelligence level 3? Since Tristan said that gRPC is viable only if the performance is comparable - I concluded that this involves the smart routing. I was hoping that the gRPC networking layer would provide some hook to specify the destination. An alternative would be a proxy hosted on the same node that would do the routing.
If we're to replace Hot Rod I was expecting the (generated) gRPC client to be extensible enough to allow us to add client-side features (like near cache, maybe listeners would need client-side code too) but saving us most of the hassle with networking and parsing, while providing a basic client in languages we don't embrace without additional cost. R. > > In which case the client will need to compute the hash before it can > hint the network layer where to connect to. > > Thanks, > Sanne > >> On 05/30/2018 02:47 PM, Radim Vansa wrote: >>> On 05/30/2018 12:46 PM, Adrian Nistor wrote: >>>> Thanks for clarifying this Galder. >>>> Yes, the network layer is indeed the culprit and the purpose of this >>>> experiment. >>>> >>>> What is the approach you envision regarding the IDL? Should we strive >>>> for a pure IDL definition of the service? That could be an interesting >>>> approach that would make it possible for a third party to generate >>>> their own infinispan grpc client in any new language that we do not >>>> already offer support, just based on the IDL. And maybe using a >>>> different grpc implementation if they do not find suitable the one >>>> from google. >>>> >>>> I was not suggesting we should do type transformation or anything on >>>> the client side that would require an extra layer of code on top of >>>> what grpc generates for the client, so maybe a pure IDL based service >>>> definition would indeed be possible, without extra helpers. No type >>>> transformation, just type information. Exposing the type info that >>>> comes from the server would be enough, a lot better than dumbing >>>> everything down to a byte[]. >>> I may be wrong but key transformation on client is necessary for correct >>> hash-aware routing, isn't it? We need to get byte array for each key and >>> apply murmur hash there (IIUC even when we use protobuf as the storage >>> format, segment is based on the raw protobuf bytes, right?).
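The near-cache decoration Radim has in mind can live entirely above whatever stub gRPC generates. A minimal, library-free sketch follows; the Function stands in for a hypothetical generated remote get, and the bounded LRU keeps the local copy from growing without limit.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a client-side near cache layered over a remote lookup.
// The Function is a stand-in for whatever stub a gRPC client generates.
public class NearCache<K, V> {

    private final Function<K, V> remoteGet;
    private final Map<K, V> local;

    public NearCache(Function<K, V> remoteGet, int maxEntries) {
        this.remoteGet = remoteGet;
        // Access-ordered LinkedHashMap gives us a simple bounded LRU.
        this.local = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public V get(K key) {
        // Serve from the local copy when present, otherwise go remote.
        return local.computeIfAbsent(key, remoteGet);
    }

    // A server-side modification event (delivered via a listener) would
    // call this to keep the near cache consistent.
    public void invalidate(K key) {
        local.remove(key);
    }
}
```

This is exactly the kind of feature that needs client-side code beyond what the IDL can express, which is Radim's point about the generated client having to be extensible.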
>>> >>> Radim >>> >>>> Adrian >>>> >>>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote: >>>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor >>>> > wrote: >>>>> >>>>> Vittorio, a few remarks regarding your statement "...The >>>>> alternative to this is to develop a protostream equivalent for >>>>> each supported language and it doesn't seem really feasible to me." >>>>> >>>>> No way! That's a big misunderstanding. We do not need to >>>>> re-implement the protostream library in C/C++/C# or any new >>>>> supported language. >>>>> Protostream is just for Java and it is compatible with Google's >>>>> protobuf lib we already use in the other clients. We can continue >>>>> using Google's protobuf lib for these clients, with or without gRPC. >>>>> Protostream does not handle protobuf services as gRPC does, but >>>>> we can add support for that with little effort. >>>>> >>>>> The real problem here is if we want to replace our hot rod >>>>> invocation protocol with gRPC to save on the effort of >>>>> implementing and maintaining hot rod in all those clients. I >>>>> wonder why the obvious question is being avoided in this thread. >>>>> >>>>> >>>>> ^ It is not being avoided. I stated it quite clearly when I replied >>>>> but maybe not with enough detail. So, I said: >>>>> >>>>>> The biggest problem I see in our client/server architecture is the >>>>> ability to quickly deliver features/APIs across multiple language >>>>> clients. Both Vittorio and I have seen how long it takes to implement >>>>> all the different features available in Java client and port them to >>>>> Node.js, C/C++/C#...etc. This effort lead by Vittorio is trying to >>>>> improve on that by having some of that work done for us. Granted, not >>>>> all of it will be done, but it should give us some good foundations >>>>> on which to build. 
>>>>> >>>>> To expand on it a bit further: the reason it takes us longer to get >>>>> different features in is because each client implements its own >>>>> network layer, parses the protocol and does type transformations >>>>> (between byte[] and whatever the client expects). >>>>> >>>>> IMO, the most costly things there are getting the network layer right >>>>> (from experience with Node.js, it has taken a while to do so) and >>>>> parsing work (not only parsing itself, but doing it in a efficient >>>>> way). Network layer also includes load balancing, failover, cluster >>>>> failover...etc. >>>>> >>>>> From past experience, transforming from byte[] to what the client >>>>> expects has never really been very problematic for me. What's been >>>>> difficult here is coming up with encoding architecture that Gustavo >>>>> lead, whose aim was to improve on the initial compatibility mode. >>>>> But, with that now clear, understood and proven to solve our issues, >>>>> the rest in this area should be fairly straightforward IMO. >>>>> >>>>> Type transformation, once done, is a constant. As we add more Hot Rod >>>>> operations, it's mostly the parsing that starts to become more work. >>>>> Network can also become more work if instead of RPC commands you >>>>> start supporting streams based commands. >>>>> >>>>> gRPC solves the network (FYI: with key as HTTP header and >>>>> SubchannelPicker you can do hash-aware routing) and parsing for us. I >>>>> don't see the need for it to solve our type transformations for us. >>>>> If it does it, great, but does it support our compatibility >>>>> requirements? (I had already told Vittorio to check Gustavo on this). >>>>> Type transformation is a lower prio for me, network and parsing are >>>>> more important. >>>>> >>>>> Hope this clarifies better my POV. 
>>>>> >>>>> Cheers >>>>> >>>>> >>>>> >>>>> Adrian >>>>> >>>>> >>>>> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: >>>>>> Thanks Adrian, >>>>>> >>>>>> of course there's a marshalling work under the cover and that is >>>>>> reflected into the generated code (specially the accessor >>>>>> methods generated from the oneof clause). >>>>>> >>>>>> My opinion is that on the client side this could be accepted, as >>>>>> long as the API are well defined and documented: application >>>>>> developer can build an adhoc decorator on the top if needed. The >>>>>> alternative to this is to develop a protostream equivalent for >>>>>> each supported language and it doesn't seem really feasible to me. >>>>>> >>>>>> On the server side (java only) the situation is different: >>>>>> protobuf is optimized for streaming not for storing so probably >>>>>> a Protostream layer is needed. >>>>>> >>>>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>>>>> > wrote: >>>>>> >>>>>> Hi Vittorio, >>>>>> thanks for exploring gRPC. It seems like a very elegant >>>>>> solution for exposing services. I'll have a look at your PoC >>>>>> soon. >>>>>> >>>>>> I feel there are some remarks that need to be made regarding >>>>>> gRPC. gRPC is just some nice cheesy topping on top of >>>>>> protobuf. Google's implementation of protobuf, to be more >>>>>> precise. >>>>>> It does not need handwritten marshallers, but the 'No need >>>>>> for marshaller' does not accurately describe it. Marshallers >>>>>> are needed and are generated under the cover by the library >>>>>> and so are the data objects and you are unfortunately forced >>>>>> to use them. That's both the good news and the bad news:) >>>>>> The whole thing looks very promising and friendly for many >>>>>> uses cases, especially for demos and PoCs :))). Nobody wants >>>>>> to write those marshallers. But it starts to become a >>>>>> nuisance if you want to use your own data objects. 
>>>>>> There is also the ugliness and excessive memory footprint of >>>>>> the generated code, which is the reason Infinispan did not >>>>>> adopt the protobuf-java library although it did adopt >>>>>> protobuf as an encoding format. >>>>>> The Protostream library was created as an alternative >>>>>> implementation to solve the aforementioned problems with the >>>>>> generated code. It solves this by letting the user provide >>>>>> their own data objects. And for the marshallers it gives you >>>>>> two options: a) write the marshaller yourself (hated), b) >>>>>> annotated your data objects and the marshaller gets >>>>>> generated (loved). Protostream does not currently support >>>>>> service definitions right now but this is something I >>>>>> started to investigate recently after Galder asked me if I >>>>>> think it's doable. I think I'll only find out after I do it:) >>>>>> >>>>>> Adrian >>>>>> >>>>>> >>>>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>>>>>> Hi Infinispan developers, >>>>>>> >>>>>>> I'm working on a solution for developers who need to access >>>>>>> Infinispan services through different programming languages. >>>>>>> >>>>>>> The focus is not on developing a full featured client, but >>>>>>> rather discover the value and the limits of this approach. >>>>>>> >>>>>>> - is it possible to automatically generate useful clients >>>>>>> in different languages? >>>>>>> - can that clients interoperate on the same cache with the >>>>>>> same data types? >>>>>>> >>>>>>> I came out with a small prototype that I would like to >>>>>>> submit to you and on which I would like to gather your >>>>>>> impressions. >>>>>>> >>>>>>> You can found the project here [1]: is a gRPC-based >>>>>>> client/server architecture for Infinispan based on and >>>>>>> EmbeddedCache, with very few features exposed atm. 
>>>>>>> >>>>>>> Currently the project is nothing more than a poc with the >>>>>>> following interesting features: >>>>>>> >>>>>>> - client can be generated in all the grpc supported >>>>>>> language: java, go, c++ examples are provided; >>>>>>> - the interface is full typed. No need for marshaller and >>>>>>> clients build in different language can cooperate on the >>>>>>> same cache; >>>>>>> >>>>>>> The second item is my preferred one beacuse it frees the >>>>>>> developer from data marshalling. >>>>>>> >>>>>>> What do you think about? >>>>>>> Sounds interesting? >>>>>>> Can you see any flaw? >>>>>>> >>>>>>> There's also a list of issues for the future [2], basically >>>>>>> I would like to investigate these questions: >>>>>>> How far this architecture can go? >>>>>>> Topology, events, queries... how many of the Infinispan >>>>>>> features can be fit in a grpc architecture? >>>>>>> >>>>>>> Thank you >>>>>>> Vittorio >>>>>>> >>>>>>> [1] https://github.com/rigazilla/ispn-grpc >>>>>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>>>>>> >>>>>>> -- >>>>>>> >>>>>>> Vittorio Rigamonti >>>>>>> >>>>>>> Senior Software Engineer >>>>>>> >>>>>>> Red Hat >>>>>>> >>>>>>> >>>>>>> >>>>>>> Milan, Italy >>>>>>> >>>>>>> vrigamon at redhat.com >>>>>>> >>>>>>> irc: rigazilla >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> >>>>>> Vittorio Rigamonti >>>>>> >>>>>> Senior Software Engineer >>>>>> >>>>>> Red Hat >>>>>> >>>>>> >>>>>> >>>>>> Milan, Italy >>>>>> >>>>>> vrigamon at redhat.com >>>>>> >>>>>> irc: rigazilla >>>>>> >>>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> >>>>> 
_______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From galder at redhat.com Wed May 30 11:08:25 2018 From: galder at redhat.com (Galder Zamarreno) Date: Wed, 30 May 2018 17:08:25 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> Message-ID: On Wed, May 30, 2018 at 5:00 PM Radim Vansa wrote: > On 05/30/2018 02:53 PM, Sanne Grinovero wrote: > > On 30 May 2018 at 13:26, Adrian Nistor wrote: > >> Yest, the client needs that hash but that does not necessarily mean it > >> has to compute it itself. > >> The hash should be applied to the storage format which might be > >> different from the format the client sees. So hash computation could be > >> done on the server, just a thought. > > Unless we want to explore some form of hybrid gRPC which benefits from > > Hot Rod intelligence level 3? > > Since Tristan said that gRPC is viable only if the performance is > comparable - I concluded that this involves the smart routing. I was > hoping that gRPC networking layer would provide some hook to specify the > destination. It does, via SubchannelPicker implementations. 
It requires key to be sent as HTTP header down the stack so that the SubchannelPicker can extract it. SubchannelPicker impl can then apply hash on it and decide based on available channels. > An alternative would be a proxy hosted on the same node > that would do the routing. > If we're to replace Hot Rod I was expecting the (generated) gRPC client > to be extensible enough to allow us add client-side features (like near > cache, maybe listeners would need client-side code too) but saving us > most of the hassle with networking and parsing, while providing basic > client in languages we don't embrace without additional cost. > > R. > > > > > In which case the client will need to compute the hash before it can > > hint the network layer were to connect to. > > > > Thanks, > > Sanne > > > >> On 05/30/2018 02:47 PM, Radim Vansa wrote: > >>> On 05/30/2018 12:46 PM, Adrian Nistor wrote: > >>>> Thanks for clarifying this Galder. > >>>> Yes, the network layer is indeed the culprit and the purpose of this > >>>> experiment. > >>>> > >>>> What is the approach you envision regarding the IDL? Should we strive > >>>> for a pure IDL definition of the service? That could be an interesting > >>>> approach that would make it possible for a third party to generate > >>>> their own infinispan grpc client in any new language that we do not > >>>> already offer support, just based on the IDL. And maybe using a > >>>> different grpc implementation if they do not find suitable the one > >>>> from google. > >>>> > >>>> I was not suggesting we should do type transformation or anything on > >>>> the client side that would require an extra layer of code on top of > >>>> what grpc generates for the client, so maybe a pure IDL based service > >>>> definition would indeed be possible, without extra helpers. No type > >>>> transformation, just type information. Exposing the type info that > >>>> comes from the server would be enough, a lot better than dumbing > >>>> everything down to a byte[]. 
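Reduced to its core, the picker decision Galder describes — pull the key bytes out of a request header, hash them, choose the channel that owns the result — can be sketched without the actual grpc-java LoadBalancer/SubchannelPicker plumbing. Class and method names below are illustrative only, and plain Strings stand in for Subchannels.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

// Sketch of the routing decision a hash-aware SubchannelPicker makes:
// hash the key bytes carried in a request header and pick the channel
// (server connection) that owns the resulting slot.
public class KeyAwarePicker {

    private final List<String> channels; // stand-ins for grpc Subchannels

    public KeyAwarePicker(List<String> channels) {
        this.channels = channels;
    }

    static int hash(byte[] data) {
        // Simple stand-in hash; a real client would use MurmurHash3 to
        // match the server's consistent hash.
        int h = 17;
        for (byte b : data) h = 31 * h + (b & 0xff);
        return h & Integer.MAX_VALUE;
    }

    public String pick(byte[] keyBytes) {
        if (keyBytes == null || channels.isEmpty())
            throw new IllegalArgumentException("no key or no channels");
        return channels.get(hash(keyBytes) % channels.size());
    }

    public static void main(String[] args) {
        KeyAwarePicker picker =
                new KeyAwarePicker(List.of("node-a", "node-b", "node-c"));
        byte[] key = "user:42".getBytes(StandardCharsets.UTF_8);
        System.out.println("route to " + picker.pick(key));
    }
}
```

In the real implementation this logic would sit inside a SubchannelPicker, with the key read from request metadata, which is why Galder notes the key has to travel down the stack as an HTTP header.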
> >>> I may be wrong but key transformation on client is necessary for > correct > >>> hash-aware routing, isn't it? We need to get byte array for each key > and > >>> apply murmur hash there (IIUC even when we use protobuf as the storage > >>> format, segment is based on the raw protobuf bytes, right?). > >>> > >>> Radim > >>> > >>>> Adrian > >>>> > >>>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote: > >>>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor >>>>> > wrote: > >>>>> > >>>>> Vittorio, a few remarks regarding your statement "...The > >>>>> alternative to this is to develop a protostream equivalent for > >>>>> each supported language and it doesn't seem really feasible to > me." > >>>>> > >>>>> No way! That's a big misunderstanding. We do not need to > >>>>> re-implement the protostream library in C/C++/C# or any new > >>>>> supported language. > >>>>> Protostream is just for Java and it is compatible with Google's > >>>>> protobuf lib we already use in the other clients. We can > continue > >>>>> using Google's protobuf lib for these clients, with or without > gRPC. > >>>>> Protostream does not handle protobuf services as gRPC does, but > >>>>> we can add support for that with little effort. > >>>>> > >>>>> The real problem here is if we want to replace our hot rod > >>>>> invocation protocol with gRPC to save on the effort of > >>>>> implementing and maintaining hot rod in all those clients. I > >>>>> wonder why the obvious question is being avoided in this > thread. > >>>>> > >>>>> > >>>>> ^ It is not being avoided. I stated it quite clearly when I replied > >>>>> but maybe not with enough detail. So, I said: > >>>>> > >>>>>> The biggest problem I see in our client/server architecture is > the > >>>>> ability to quickly deliver features/APIs across multiple language > >>>>> clients. Both Vittorio and I have seen how long it takes to implement > >>>>> all the different features available in Java client and port them to > >>>>> Node.js, C/C++/C#...etc. 
This effort lead by Vittorio is trying to > >>>>> improve on that by having some of that work done for us. Granted, not > >>>>> all of it will be done, but it should give us some good foundations > >>>>> on which to build. > >>>>> > >>>>> To expand on it a bit further: the reason it takes us longer to get > >>>>> different features in is because each client implements its own > >>>>> network layer, parses the protocol and does type transformations > >>>>> (between byte[] and whatever the client expects). > >>>>> > >>>>> IMO, the most costly things there are getting the network layer right > >>>>> (from experience with Node.js, it has taken a while to do so) and > >>>>> parsing work (not only parsing itself, but doing it in a efficient > >>>>> way). Network layer also includes load balancing, failover, cluster > >>>>> failover...etc. > >>>>> > >>>>> From past experience, transforming from byte[] to what the client > >>>>> expects has never really been very problematic for me. What's been > >>>>> difficult here is coming up with encoding architecture that Gustavo > >>>>> lead, whose aim was to improve on the initial compatibility mode. > >>>>> But, with that now clear, understood and proven to solve our issues, > >>>>> the rest in this area should be fairly straightforward IMO. > >>>>> > >>>>> Type transformation, once done, is a constant. As we add more Hot Rod > >>>>> operations, it's mostly the parsing that starts to become more work. > >>>>> Network can also become more work if instead of RPC commands you > >>>>> start supporting streams based commands. > >>>>> > >>>>> gRPC solves the network (FYI: with key as HTTP header and > >>>>> SubchannelPicker you can do hash-aware routing) and parsing for us. I > >>>>> don't see the need for it to solve our type transformations for us. > >>>>> If it does it, great, but does it support our compatibility > >>>>> requirements? (I had already told Vittorio to check Gustavo on this). 
> >>>>> Type transformation is a lower prio for me, network and parsing are > >>>>> more important. > >>>>> > >>>>> Hope this clarifies better my POV. > >>>>> > >>>>> Cheers > >>>>> > >>>>> > >>>>> > >>>>> Adrian > >>>>> > >>>>> > >>>>> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: > >>>>>> Thanks Adrian, > >>>>>> > >>>>>> of course there's a marshalling work under the cover and that > is > >>>>>> reflected into the generated code (specially the accessor > >>>>>> methods generated from the oneof clause). > >>>>>> > >>>>>> My opinion is that on the client side this could be accepted, > as > >>>>>> long as the API are well defined and documented: application > >>>>>> developer can build an adhoc decorator on the top if needed. > The > >>>>>> alternative to this is to develop a protostream equivalent for > >>>>>> each supported language and it doesn't seem really feasible > to me. > >>>>>> > >>>>>> On the server side (java only) the situation is different: > >>>>>> protobuf is optimized for streaming not for storing so > probably > >>>>>> a Protostream layer is needed. > >>>>>> > >>>>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor > >>>>>> > wrote: > >>>>>> > >>>>>> Hi Vittorio, > >>>>>> thanks for exploring gRPC. It seems like a very elegant > >>>>>> solution for exposing services. I'll have a look at your > PoC > >>>>>> soon. > >>>>>> > >>>>>> I feel there are some remarks that need to be made > regarding > >>>>>> gRPC. gRPC is just some nice cheesy topping on top of > >>>>>> protobuf. Google's implementation of protobuf, to be more > >>>>>> precise. > >>>>>> It does not need handwritten marshallers, but the 'No need > >>>>>> for marshaller' does not accurately describe it. > Marshallers > >>>>>> are needed and are generated under the cover by the > library > >>>>>> and so are the data objects and you are unfortunately > forced > >>>>>> to use them. 
That's both the good news and the bad news:) > >>>>>> The whole thing looks very promising and friendly for many > >>>>>> uses cases, especially for demos and PoCs :))). Nobody > wants > >>>>>> to write those marshallers. But it starts to become a > >>>>>> nuisance if you want to use your own data objects. > >>>>>> There is also the ugliness and excessive memory footprint > of > >>>>>> the generated code, which is the reason Infinispan did not > >>>>>> adopt the protobuf-java library although it did adopt > >>>>>> protobuf as an encoding format. > >>>>>> The Protostream library was created as an alternative > >>>>>> implementation to solve the aforementioned problems with > the > >>>>>> generated code. It solves this by letting the user provide > >>>>>> their own data objects. And for the marshallers it gives > you > >>>>>> two options: a) write the marshaller yourself (hated), b) > >>>>>> annotated your data objects and the marshaller gets > >>>>>> generated (loved). Protostream does not currently support > >>>>>> service definitions right now but this is something I > >>>>>> started to investigate recently after Galder asked me if I > >>>>>> think it's doable. I think I'll only find out after I do > it:) > >>>>>> > >>>>>> Adrian > >>>>>> > >>>>>> > >>>>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: > >>>>>>> Hi Infinispan developers, > >>>>>>> > >>>>>>> I'm working on a solution for developers who need to > access > >>>>>>> Infinispan services through different programming > languages. > >>>>>>> > >>>>>>> The focus is not on developing a full featured client, > but > >>>>>>> rather discover the value and the limits of this > approach. > >>>>>>> > >>>>>>> - is it possible to automatically generate useful clients > >>>>>>> in different languages? > >>>>>>> - can that clients interoperate on the same cache with > the > >>>>>>> same data types? 
> >>>>>>>
> >>>>>>> I came up with a small prototype that I would like to
> >>>>>>> submit to you, and on which I would like to gather your
> >>>>>>> impressions.
> >>>>>>>
> >>>>>>> You can find the project here [1]: it is a gRPC-based
> >>>>>>> client/server architecture for Infinispan, based on an
> >>>>>>> EmbeddedCache, with very few features exposed atm.
> >>>>>>>
> >>>>>>> Currently the project is nothing more than a PoC, with the
> >>>>>>> following interesting features:
> >>>>>>>
> >>>>>>> - clients can be generated in all the gRPC-supported
> >>>>>>> languages: java, go and c++ examples are provided;
> >>>>>>> - the interface is fully typed. No need for marshallers, and
> >>>>>>> clients built in different languages can cooperate on the
> >>>>>>> same cache;
> >>>>>>>
> >>>>>>> The second item is my favourite, because it frees the
> >>>>>>> developer from data marshalling.
> >>>>>>>
> >>>>>>> What do you think about it?
> >>>>>>> Does it sound interesting?
> >>>>>>> Can you see any flaws?
> >>>>>>>
> >>>>>>> There's also a list of issues for the future [2]; basically
> >>>>>>> I would like to investigate these questions:
> >>>>>>> How far can this architecture go?
> >>>>>>> Topology, events, queries... how many of the Infinispan
> >>>>>>> features can fit in a gRPC architecture?
> >>>>>>>
> >>>>>>> Thank you
> >>>>>>> Vittorio
> >>>>>>>
> >>>>>>> [1] https://github.com/rigazilla/ispn-grpc
> >>>>>>> [2] https://github.com/rigazilla/ispn-grpc/issues
> >>>>>>>
> >>>>>>> --
> >>>>>>> Vittorio Rigamonti
> >>>>>>> Senior Software Engineer
> >>>>>>> Red Hat
> >>>>>>> Milan, Italy
> >>>>>>> vrigamon at redhat.com
> >>>>>>> irc: rigazilla
> >>>>>>>
> >>>>>>> _______________________________________________
> >>>>>>> infinispan-dev mailing list
> >>>>>>> infinispan-dev at lists.jboss.org
> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Radim Vansa
> JBoss Performance Team
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
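[Editor's note: to make the "fully typed interface" and oneof discussion above concrete, here is an illustrative proto3 sketch of what a typed cache service could look like. All names (KeyMsg, ValueMsg, CacheService, etc.) are invented for this example and are not the actual ispn-grpc definitions; see the repository at [1] above for the real ones.]

```proto
syntax = "proto3";

// A oneof lets a single field slot carry one of several concrete types.
// The per-case accessor methods that protoc generates for it are what
// Vittorio refers to in the thread above.
message KeyMsg {
  oneof key {
    string string_key = 1;
    int64 long_key = 2;
  }
}

message ValueMsg {
  oneof value {
    string string_value = 1;
    bytes raw_value = 2;
  }
}

message PutRequest {
  KeyMsg key = 1;
  ValueMsg value = 2;
}

// A typed service: clients generated from this definition exchange
// structured messages, not opaque byte arrays.
service CacheService {
  rpc put(PutRequest) returns (ValueMsg);
  rpc get(KeyMsg) returns (ValueMsg);
}
```

From a definition like this, protoc with the gRPC plugins can emit clients in Java, Go, C++ and the other supported languages, and because the wire format is plain protobuf, those clients can read each other's entries without hand-written marshallers, which is the cross-language interoperability property the prototype aims for.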