From vrigamon at redhat.com Mon Jun 4 04:03:51 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Mon, 4 Jun 2018 10:03:51 +0200 Subject: [infinispan-dev] Infinispan 9.3.0.CR1 is out! Message-ID: Dear all, we have released Infinispan 9.3.0.CR1. Read all about it here: https://blog.infinispan.org/2018/06/infinispan-930cr1.html Enjoy! Vittorio -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180604/9aa68b02/attachment.html From vrigamon at redhat.com Tue Jun 5 08:03:39 2018 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Tue, 5 Jun 2018 14:03:39 +0200 Subject: [infinispan-dev] weekly zulip meeting Message-ID: Hi All, I'll miss the chat meeting today, sorry. Here's my status: last week I sent an email to the infinispan-dev ML to gather opinions about my project for an ispn client based on gRPC. From the feedback, it seems the community is more interested in how gRPC can save us work on the networking layer (topology, retry, security) than in my work on user data model representation. So I will probably shift my focus and do some more work in that direction. Apart from that, I'm about to complete my work on counters in C# (HRCPP-457): finishing the listener implementation and the test suite. I've also released the 9.3.0.CR1 Infinispan version. This week I'll work on parametrized queries (HRCPP-339), and I would also like to have the work on HRCPP-454/455 merged so I can build a release for the C++ clients. Cheers, Vittorio -- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180605/baa7d207/attachment.html From manik at infinispan.org Wed Jun 6 02:13:38 2018 From: manik at infinispan.org (Manik Surtani) Date: Tue, 5 Jun 2018 23:13:38 -0700 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> Message-ID: Hello everyone! (Wow, it's been a while since I dropped by and said hello... ) Super-interesting discussion. Adrian: "What is the approach you envision regarding the IDL? Should we strive for a pure IDL definition of the service? That could be an interesting approach that would make it possible for a third party to generate their own infinispan grpc client in any new language that we do not already offer support, just based on the IDL. And maybe using a different grpc implementation if they do not find suitable the one from google." This is spot-on, and where I see value in gRPC being incorporated into Infinispan: making use of open(-ish) standards of RPC communication and applying that to how we do client/server. Good discussion on handling payload types in the interface definition. I've made use of some of the ideas discussed here when creating a proto-defined IDL to look up ... more serialized proto definitions for given types! Keen to see what your PoC looks like. - M On Wed, 30 May 2018 at 08:19 Galder Zamarreno wrote: > On Wed, May 30, 2018 at 5:00 PM Radim Vansa wrote: > >> On 05/30/2018 02:53 PM, Sanne Grinovero wrote: >> > On 30 May 2018 at 13:26, Adrian Nistor wrote: >> >> Yest, the client needs that hash but that does not necessarily mean it >> >> has to compute it itself. 
>> >> The hash should be applied to the storage format which might be >> >> different from the format the client sees. So hash computation could be >> >> done on the server, just a thought. >> > Unless we want to explore some form of hybrid gRPC which benefits from >> > Hot Rod intelligence level 3? >> >> Since Tristan said that gRPC is viable only if the performance is >> comparable - I concluded that this involves the smart routing. I was >> hoping that gRPC networking layer would provide some hook to specify the >> destination. > > > It does, via SubchannelPicker implementations. It requires key to be sent > as HTTP header down the stack so that the SubchannelPicker can extract it. > > SubchannelPicker impl can then apply hash on it and decide based on > available channels. > > >> An alternative would be a proxy hosted on the same node >> that would do the routing. > > >> If we're to replace Hot Rod I was expecting the (generated) gRPC client >> to be extensible enough to allow us add client-side features (like near >> cache, maybe listeners would need client-side code too) but saving us >> most of the hassle with networking and parsing, while providing basic >> client in languages we don't embrace without additional cost. >> >> R. >> >> > >> > In which case the client will need to compute the hash before it can >> > hint the network layer were to connect to. >> > >> > Thanks, >> > Sanne >> > >> >> On 05/30/2018 02:47 PM, Radim Vansa wrote: >> >>> On 05/30/2018 12:46 PM, Adrian Nistor wrote: >> >>>> Thanks for clarifying this Galder. >> >>>> Yes, the network layer is indeed the culprit and the purpose of this >> >>>> experiment. >> >>>> >> >>>> What is the approach you envision regarding the IDL? Should we strive >> >>>> for a pure IDL definition of the service? 
That could be an >> interesting >> >>>> approach that would make it possible for a third party to generate >> >>>> their own infinispan grpc client in any new language that we do not >> >>>> already offer support, just based on the IDL. And maybe using a >> >>>> different grpc implementation if they do not find suitable the one >> >>>> from google. >> >>>> >> >>>> I was not suggesting we should do type transformation or anything on >> >>>> the client side that would require an extra layer of code on top of >> >>>> what grpc generates for the client, so maybe a pure IDL based service >> >>>> definition would indeed be possible, without extra helpers. No type >> >>>> transformation, just type information. Exposing the type info that >> >>>> comes from the server would be enough, a lot better than dumbing >> >>>> everything down to a byte[]. >> >>> I may be wrong but key transformation on client is necessary for >> correct >> >>> hash-aware routing, isn't it? We need to get byte array for each key >> and >> >>> apply murmur hash there (IIUC even when we use protobuf as the storage >> >>> format, segment is based on the raw protobuf bytes, right?). >> >>> >> >>> Radim >> >>> >> >>>> Adrian >> >>>> >> >>>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote: >> >>>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor > >>>>> > wrote: >> >>>>> >> >>>>> Vittorio, a few remarks regarding your statement "...The >> >>>>> alternative to this is to develop a protostream equivalent for >> >>>>> each supported language and it doesn't seem really feasible >> to me." >> >>>>> >> >>>>> No way! That's a big misunderstanding. We do not need to >> >>>>> re-implement the protostream library in C/C++/C# or any new >> >>>>> supported language. >> >>>>> Protostream is just for Java and it is compatible with >> Google's >> >>>>> protobuf lib we already use in the other clients. We can >> continue >> >>>>> using Google's protobuf lib for these clients, with or >> without gRPC. 
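(A quick illustration of the hash-aware routing question raised above: every client hashes the raw serialized key bytes and maps the hash onto a segment, so that all clients, whatever their language, agree on which node owns the key. The sketch below is a simplified plain-Java rendition of that idea — MurmurHash3 32-bit plus a modulo segment mapping — and not Infinispan's actual consistent-hash implementation.)

```java
import java.nio.charset.StandardCharsets;

// Simplified sketch of client-side hash-aware routing as discussed in the
// thread: hash the raw (serialized) key bytes, map the hash to a segment,
// then pick the owner of that segment as the destination.
// Illustration only -- not Infinispan's real consistent hash.
public class KeyRouter {

    // MurmurHash3, 32-bit x86 variant (public-domain algorithm).
    static int murmur3_32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h = seed, i = 0;
        // Body: process the input four bytes at a time.
        for (; i + 4 <= data.length; i += 4) {
            int k = (data[i] & 0xff)
                  | (data[i + 1] & 0xff) << 8
                  | (data[i + 2] & 0xff) << 16
                  | (data[i + 3] & 0xff) << 24;
            k *= c1; k = Integer.rotateLeft(k, 15); k *= c2;
            h ^= k; h = Integer.rotateLeft(h, 13); h = h * 5 + 0xe6546b64;
        }
        // Tail: 0-3 remaining bytes.
        int k = 0;
        switch (data.length - i) {
            case 3: k ^= (data[i + 2] & 0xff) << 16; // fall through
            case 2: k ^= (data[i + 1] & 0xff) << 8;  // fall through
            case 1: k ^= (data[i] & 0xff);
                    k *= c1; k = Integer.rotateLeft(k, 15); k *= c2;
                    h ^= k;
        }
        // Finalization mix.
        h ^= data.length;
        h ^= h >>> 16; h *= 0x85ebca6b;
        h ^= h >>> 13; h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Map a key's hash onto one of numSegments segments.
    static int segmentFor(byte[] keyBytes, int numSegments) {
        return (murmur3_32(keyBytes, 0) & Integer.MAX_VALUE) % numSegments;
    }

    public static void main(String[] args) {
        byte[] key = "user:42".getBytes(StandardCharsets.UTF_8);
        System.out.println("segment = " + segmentFor(key, 256));
    }
}
```

This is also why the storage format matters, as noted above: the hash has to be computed over the same bytes the server actually distributes on (e.g. the raw protobuf encoding of the key), or clients and servers will disagree on ownership.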
>> >>>>> Protostream does not handle protobuf services as gRPC does, >> but >> >>>>> we can add support for that with little effort. >> >>>>> >> >>>>> The real problem here is if we want to replace our hot rod >> >>>>> invocation protocol with gRPC to save on the effort of >> >>>>> implementing and maintaining hot rod in all those clients. I >> >>>>> wonder why the obvious question is being avoided in this >> thread. >> >>>>> >> >>>>> >> >>>>> ^ It is not being avoided. I stated it quite clearly when I replied >> >>>>> but maybe not with enough detail. So, I said: >> >>>>> >> >>>>>> The biggest problem I see in our client/server architecture is >> the >> >>>>> ability to quickly deliver features/APIs across multiple language >> >>>>> clients. Both Vittorio and I have seen how long it takes to >> implement >> >>>>> all the different features available in Java client and port them to >> >>>>> Node.js, C/C++/C#...etc. This effort lead by Vittorio is trying to >> >>>>> improve on that by having some of that work done for us. Granted, >> not >> >>>>> all of it will be done, but it should give us some good foundations >> >>>>> on which to build. >> >>>>> >> >>>>> To expand on it a bit further: the reason it takes us longer to get >> >>>>> different features in is because each client implements its own >> >>>>> network layer, parses the protocol and does type transformations >> >>>>> (between byte[] and whatever the client expects). >> >>>>> >> >>>>> IMO, the most costly things there are getting the network layer >> right >> >>>>> (from experience with Node.js, it has taken a while to do so) and >> >>>>> parsing work (not only parsing itself, but doing it in a efficient >> >>>>> way). Network layer also includes load balancing, failover, cluster >> >>>>> failover...etc. >> >>>>> >> >>>>> From past experience, transforming from byte[] to what the client >> >>>>> expects has never really been very problematic for me. 
What's been >> >>>>> difficult here is coming up with encoding architecture that Gustavo >> >>>>> lead, whose aim was to improve on the initial compatibility mode. >> >>>>> But, with that now clear, understood and proven to solve our issues, >> >>>>> the rest in this area should be fairly straightforward IMO. >> >>>>> >> >>>>> Type transformation, once done, is a constant. As we add more Hot >> Rod >> >>>>> operations, it's mostly the parsing that starts to become more work. >> >>>>> Network can also become more work if instead of RPC commands you >> >>>>> start supporting streams based commands. >> >>>>> >> >>>>> gRPC solves the network (FYI: with key as HTTP header and >> >>>>> SubchannelPicker you can do hash-aware routing) and parsing for us. >> I >> >>>>> don't see the need for it to solve our type transformations for us. >> >>>>> If it does it, great, but does it support our compatibility >> >>>>> requirements? (I had already told Vittorio to check Gustavo on >> this). >> >>>>> Type transformation is a lower prio for me, network and parsing are >> >>>>> more important. >> >>>>> >> >>>>> Hope this clarifies better my POV. >> >>>>> >> >>>>> Cheers >> >>>>> >> >>>>> >> >>>>> >> >>>>> Adrian >> >>>>> >> >>>>> >> >>>>> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: >> >>>>>> Thanks Adrian, >> >>>>>> >> >>>>>> of course there's a marshalling work under the cover and >> that is >> >>>>>> reflected into the generated code (specially the accessor >> >>>>>> methods generated from the oneof clause). >> >>>>>> >> >>>>>> My opinion is that on the client side this could be >> accepted, as >> >>>>>> long as the API are well defined and documented: application >> >>>>>> developer can build an adhoc decorator on the top if needed. >> The >> >>>>>> alternative to this is to develop a protostream equivalent >> for >> >>>>>> each supported language and it doesn't seem really feasible >> to me. 
>> >>>>>> >> >>>>>> On the server side (java only) the situation is different: >> >>>>>> protobuf is optimized for streaming not for storing so >> probably >> >>>>>> a Protostream layer is needed. >> >>>>>> >> >>>>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >> >>>>>> > wrote: >> >>>>>> >> >>>>>> Hi Vittorio, >> >>>>>> thanks for exploring gRPC. It seems like a very elegant >> >>>>>> solution for exposing services. I'll have a look at your >> PoC >> >>>>>> soon. >> >>>>>> >> >>>>>> I feel there are some remarks that need to be made >> regarding >> >>>>>> gRPC. gRPC is just some nice cheesy topping on top of >> >>>>>> protobuf. Google's implementation of protobuf, to be more >> >>>>>> precise. >> >>>>>> It does not need handwritten marshallers, but the 'No >> need >> >>>>>> for marshaller' does not accurately describe it. >> Marshallers >> >>>>>> are needed and are generated under the cover by the >> library >> >>>>>> and so are the data objects and you are unfortunately >> forced >> >>>>>> to use them. That's both the good news and the bad news:) >> >>>>>> The whole thing looks very promising and friendly for >> many >> >>>>>> uses cases, especially for demos and PoCs :))). Nobody >> wants >> >>>>>> to write those marshallers. But it starts to become a >> >>>>>> nuisance if you want to use your own data objects. >> >>>>>> There is also the ugliness and excessive memory >> footprint of >> >>>>>> the generated code, which is the reason Infinispan did >> not >> >>>>>> adopt the protobuf-java library although it did adopt >> >>>>>> protobuf as an encoding format. >> >>>>>> The Protostream library was created as an alternative >> >>>>>> implementation to solve the aforementioned problems with >> the >> >>>>>> generated code. It solves this by letting the user >> provide >> >>>>>> their own data objects. 
And for the marshallers it gives >> you >> >>>>>> two options: a) write the marshaller yourself (hated), b) >> >>>>>> annotate your data objects and the marshaller gets >> >>>>>> generated (loved). Protostream does not currently support >> >>>>>> service definitions, but this is something I >> >>>>>> started to investigate recently after Galder asked me if >> I >> >>>>>> think it's doable. I think I'll only find out after I do >> it:) >> >>>>>> >> >>>>>> Adrian >> >>>>>> >> >>>>>> >> >>>>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >> >>>>>>> Hi Infinispan developers, >> >>>>>>> >> >>>>>>> I'm working on a solution for developers who need to >> access >> >>>>>>> Infinispan services through different programming >> languages. >> >>>>>>> >> >>>>>>> The focus is not on developing a full-featured client, >> but >> >>>>>>> rather to discover the value and the limits of this >> approach. >> >>>>>>> >> >>>>>>> - is it possible to automatically generate useful >> clients >> >>>>>>> in different languages? >> >>>>>>> - can those clients interoperate on the same cache with >> the >> >>>>>>> same data types? >> >>>>>>> >> >>>>>>> I came up with a small prototype that I would like to >> >>>>>>> submit to you and on which I would like to gather your >> >>>>>>> impressions. >> >>>>>>> >> >>>>>>> You can find the project here [1]: it is a gRPC-based >> >>>>>>> client/server architecture for Infinispan based on an >> >>>>>>> EmbeddedCache, with very few features exposed atm. >> >>>>>>> >> >>>>>>> Currently the project is nothing more than a PoC with >> the >> >>>>>>> following interesting features: >> >>>>>>> >> >>>>>>> - clients can be generated in all the gRPC-supported >> >>>>>>> languages: Java, Go, and C++ examples are provided; >> >>>>>>> - the interface is fully typed. 
No need for marshallers, >> and >> >>>>>>> clients built in different languages can cooperate on the >> >>>>>>> same cache; >> >>>>>>> >> >>>>>>> The second item is my preferred one because it frees the >> >>>>>>> developer from data marshalling. >> >>>>>>> >> >>>>>>> What do you think? >> >>>>>>> Sounds interesting? >> >>>>>>> Can you see any flaw? >> >>>>>>> >> >>>>>>> There's also a list of issues for the future [2], >> basically >> >>>>>>> I would like to investigate these questions: >> >>>>>>> How far can this architecture go? >> >>>>>>> Topology, events, queries... how many of the Infinispan >> >>>>>>> features can fit in a gRPC architecture? >> >>>>>>> >> >>>>>>> Thank you >> >>>>>>> Vittorio >> >>>>>>> >> >>>>>>> [1] https://github.com/rigazilla/ispn-grpc >> >>>>>>> [2] https://github.com/rigazilla/ispn-grpc/issues >> >>>>>>> >> >>>>>>> -- >> >>>>>>> >> >>>>>>> Vittorio Rigamonti >> >>>>>>> >> >>>>>>> Senior Software Engineer >> >>>>>>> >> >>>>>>> Red Hat >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> Milan, Italy >> >>>>>>> >> >>>>>>> vrigamon at redhat.com >> >>>>>>> >> >>>>>>> irc: rigazilla >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> _______________________________________________ >> >>>>>>> infinispan-dev mailing list >> >>>>>>> infinispan-dev at lists.jboss.org >> >>>>>>> >> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> -- >> >>>>>> >> >>>>>> Vittorio Rigamonti >> >>>>>> >> >>>>>> Senior Software Engineer >> >>>>>> >> >>>>>> Red Hat >> >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> Milan, Italy >> >>>>>> >> >>>>>> vrigamon at redhat.com >> >>>>>> >> >>>>>> irc: rigazilla >> >>>>>> >> >>>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >>>>> >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>>>> >> >>>>> >> >>>>> _______________________________________________ >> >>>>> 
infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180605/b5e200b3/attachment-0001.html From sanne at infinispan.org Wed Jun 6 06:37:12 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 6 Jun 2018 11:37:12 +0100 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> Message-ID: Thanks Manik! Great to hear some feedback from you, especially as you have way more experience with gRPC. Beyond helping to develop (and maintain!) clients for a wider range of programming languages - it would also help to provide both a "traditional" and a non-blocking client for each such language, while having to maintain just an async server implementation. 
Sanne On 6 June 2018 at 07:13, Manik Surtani wrote: > Hello everyone! (Wow, it's been a while since I dropped by and said hello... > ) > > Super-interesting discussion. > > Adrian: "What is the approach you envision regarding the IDL? Should we > strive for a pure IDL definition of the service? That could be an > interesting approach that would make it possible for a third party to > generate their own infinispan grpc client in any new language that we do not > already offer support, just based on the IDL. And maybe using a different > grpc implementation if they do not find suitable the one from google." > > This is spot-on, and where I see value in gRPC being incorporated into > Infinispan: making use of open(-ish) standards of RPC communication and > applying that to how we do client/server. Good discussion on handling > payload types in the interface definition. I've made use of some of the > ideas discussed here when creating a proto-defined IDL to look up ... more > serialized proto definitions for given types! Keen to see what your PoC > looks like. > > - M > > > On Wed, 30 May 2018 at 08:19 Galder Zamarreno wrote: >> >> On Wed, May 30, 2018 at 5:00 PM Radim Vansa wrote: >>> >>> On 05/30/2018 02:53 PM, Sanne Grinovero wrote: >>> > On 30 May 2018 at 13:26, Adrian Nistor wrote: >>> >> Yest, the client needs that hash but that does not necessarily mean it >>> >> has to compute it itself. >>> >> The hash should be applied to the storage format which might be >>> >> different from the format the client sees. So hash computation could >>> >> be >>> >> done on the server, just a thought. >>> > Unless we want to explore some form of hybrid gRPC which benefits from >>> > Hot Rod intelligence level 3? >>> >>> Since Tristan said that gRPC is viable only if the performance is >>> comparable - I concluded that this involves the smart routing. I was >>> hoping that gRPC networking layer would provide some hook to specify the >>> destination. 
>> >> >> It does, via SubchannelPicker implementations. It requires key to be sent >> as HTTP header down the stack so that the SubchannelPicker can extract it. >> >> SubchannelPicker impl can then apply hash on it and decide based on >> available channels. >> >>> >>> An alternative would be a proxy hosted on the same node >>> that would do the routing. >>> >>> >>> If we're to replace Hot Rod I was expecting the (generated) gRPC client >>> to be extensible enough to allow us add client-side features (like near >>> cache, maybe listeners would need client-side code too) but saving us >>> most of the hassle with networking and parsing, while providing basic >>> client in languages we don't embrace without additional cost. >>> >>> R. >>> >>> > >>> > In which case the client will need to compute the hash before it can >>> > hint the network layer were to connect to. >>> > >>> > Thanks, >>> > Sanne >>> > >>> >> On 05/30/2018 02:47 PM, Radim Vansa wrote: >>> >>> On 05/30/2018 12:46 PM, Adrian Nistor wrote: >>> >>>> Thanks for clarifying this Galder. >>> >>>> Yes, the network layer is indeed the culprit and the purpose of this >>> >>>> experiment. >>> >>>> >>> >>>> What is the approach you envision regarding the IDL? Should we >>> >>>> strive >>> >>>> for a pure IDL definition of the service? That could be an >>> >>>> interesting >>> >>>> approach that would make it possible for a third party to generate >>> >>>> their own infinispan grpc client in any new language that we do not >>> >>>> already offer support, just based on the IDL. And maybe using a >>> >>>> different grpc implementation if they do not find suitable the one >>> >>>> from google. >>> >>>> >>> >>>> I was not suggesting we should do type transformation or anything on >>> >>>> the client side that would require an extra layer of code on top of >>> >>>> what grpc generates for the client, so maybe a pure IDL based >>> >>>> service >>> >>>> definition would indeed be possible, without extra helpers. 
No type >>> >>>> transformation, just type information. Exposing the type info that >>> >>>> comes from the server would be enough, a lot better than dumbing >>> >>>> everything down to a byte[]. >>> >>> I may be wrong but key transformation on client is necessary for >>> >>> correct >>> >>> hash-aware routing, isn't it? We need to get byte array for each key >>> >>> and >>> >>> apply murmur hash there (IIUC even when we use protobuf as the >>> >>> storage >>> >>> format, segment is based on the raw protobuf bytes, right?). >>> >>> >>> >>> Radim >>> >>> >>> >>>> Adrian >>> >>>> >>> >>>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote: >>> >>>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor >> >>>>> > wrote: >>> >>>>> >>> >>>>> Vittorio, a few remarks regarding your statement "...The >>> >>>>> alternative to this is to develop a protostream equivalent >>> >>>>> for >>> >>>>> each supported language and it doesn't seem really feasible >>> >>>>> to me." >>> >>>>> >>> >>>>> No way! That's a big misunderstanding. We do not need to >>> >>>>> re-implement the protostream library in C/C++/C# or any new >>> >>>>> supported language. >>> >>>>> Protostream is just for Java and it is compatible with >>> >>>>> Google's >>> >>>>> protobuf lib we already use in the other clients. We can >>> >>>>> continue >>> >>>>> using Google's protobuf lib for these clients, with or >>> >>>>> without gRPC. >>> >>>>> Protostream does not handle protobuf services as gRPC does, >>> >>>>> but >>> >>>>> we can add support for that with little effort. >>> >>>>> >>> >>>>> The real problem here is if we want to replace our hot rod >>> >>>>> invocation protocol with gRPC to save on the effort of >>> >>>>> implementing and maintaining hot rod in all those clients. I >>> >>>>> wonder why the obvious question is being avoided in this >>> >>>>> thread. >>> >>>>> >>> >>>>> >>> >>>>> ^ It is not being avoided. I stated it quite clearly when I replied >>> >>>>> but maybe not with enough detail. 
So, I said: >>> >>>>> >>> >>>>>> The biggest problem I see in our client/server architecture is >>> >>>>>> the >>> >>>>> ability to quickly deliver features/APIs across multiple language >>> >>>>> clients. Both Vittorio and I have seen how long it takes to >>> >>>>> implement >>> >>>>> all the different features available in Java client and port them >>> >>>>> to >>> >>>>> Node.js, C/C++/C#...etc. This effort lead by Vittorio is trying to >>> >>>>> improve on that by having some of that work done for us. Granted, >>> >>>>> not >>> >>>>> all of it will be done, but it should give us some good foundations >>> >>>>> on which to build. >>> >>>>> >>> >>>>> To expand on it a bit further: the reason it takes us longer to get >>> >>>>> different features in is because each client implements its own >>> >>>>> network layer, parses the protocol and does type transformations >>> >>>>> (between byte[] and whatever the client expects). >>> >>>>> >>> >>>>> IMO, the most costly things there are getting the network layer >>> >>>>> right >>> >>>>> (from experience with Node.js, it has taken a while to do so) and >>> >>>>> parsing work (not only parsing itself, but doing it in a efficient >>> >>>>> way). Network layer also includes load balancing, failover, cluster >>> >>>>> failover...etc. >>> >>>>> >>> >>>>> From past experience, transforming from byte[] to what the client >>> >>>>> expects has never really been very problematic for me. What's been >>> >>>>> difficult here is coming up with encoding architecture that Gustavo >>> >>>>> lead, whose aim was to improve on the initial compatibility mode. >>> >>>>> But, with that now clear, understood and proven to solve our >>> >>>>> issues, >>> >>>>> the rest in this area should be fairly straightforward IMO. >>> >>>>> >>> >>>>> Type transformation, once done, is a constant. As we add more Hot >>> >>>>> Rod >>> >>>>> operations, it's mostly the parsing that starts to become more >>> >>>>> work. 
>>> >>>>> Network can also become more work if instead of RPC commands you >>> >>>>> start supporting streams based commands. >>> >>>>> >>> >>>>> gRPC solves the network (FYI: with key as HTTP header and >>> >>>>> SubchannelPicker you can do hash-aware routing) and parsing for us. >>> >>>>> I >>> >>>>> don't see the need for it to solve our type transformations for us. >>> >>>>> If it does it, great, but does it support our compatibility >>> >>>>> requirements? (I had already told Vittorio to check Gustavo on >>> >>>>> this). >>> >>>>> Type transformation is a lower prio for me, network and parsing are >>> >>>>> more important. >>> >>>>> >>> >>>>> Hope this clarifies better my POV. >>> >>>>> >>> >>>>> Cheers >>> >>>>> >>> >>>>> >>> >>>>> >>> >>>>> Adrian >>> >>>>> >>> >>>>> >>> >>>>> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: >>> >>>>>> Thanks Adrian, >>> >>>>>> >>> >>>>>> of course there's a marshalling work under the cover and >>> >>>>>> that is >>> >>>>>> reflected into the generated code (specially the accessor >>> >>>>>> methods generated from the oneof clause). >>> >>>>>> >>> >>>>>> My opinion is that on the client side this could be >>> >>>>>> accepted, as >>> >>>>>> long as the API are well defined and documented: application >>> >>>>>> developer can build an adhoc decorator on the top if needed. >>> >>>>>> The >>> >>>>>> alternative to this is to develop a protostream equivalent >>> >>>>>> for >>> >>>>>> each supported language and it doesn't seem really feasible >>> >>>>>> to me. >>> >>>>>> >>> >>>>>> On the server side (java only) the situation is different: >>> >>>>>> protobuf is optimized for streaming not for storing so >>> >>>>>> probably >>> >>>>>> a Protostream layer is needed. >>> >>>>>> >>> >>>>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor >>> >>>>>> > wrote: >>> >>>>>> >>> >>>>>> Hi Vittorio, >>> >>>>>> thanks for exploring gRPC. It seems like a very elegant >>> >>>>>> solution for exposing services. 
I'll have a look at your >>> >>>>>> PoC >>> >>>>>> soon. >>> >>>>>> >>> >>>>>> I feel there are some remarks that need to be made >>> >>>>>> regarding >>> >>>>>> gRPC. gRPC is just some nice cheesy topping on top of >>> >>>>>> protobuf. Google's implementation of protobuf, to be >>> >>>>>> more >>> >>>>>> precise. >>> >>>>>> It does not need handwritten marshallers, but the 'No >>> >>>>>> need >>> >>>>>> for marshaller' does not accurately describe it. >>> >>>>>> Marshallers >>> >>>>>> are needed and are generated under the cover by the >>> >>>>>> library >>> >>>>>> and so are the data objects and you are unfortunately >>> >>>>>> forced >>> >>>>>> to use them. That's both the good news and the bad >>> >>>>>> news:) >>> >>>>>> The whole thing looks very promising and friendly for >>> >>>>>> many >>> >>>>>> uses cases, especially for demos and PoCs :))). Nobody >>> >>>>>> wants >>> >>>>>> to write those marshallers. But it starts to become a >>> >>>>>> nuisance if you want to use your own data objects. >>> >>>>>> There is also the ugliness and excessive memory >>> >>>>>> footprint of >>> >>>>>> the generated code, which is the reason Infinispan did >>> >>>>>> not >>> >>>>>> adopt the protobuf-java library although it did adopt >>> >>>>>> protobuf as an encoding format. >>> >>>>>> The Protostream library was created as an alternative >>> >>>>>> implementation to solve the aforementioned problems with >>> >>>>>> the >>> >>>>>> generated code. It solves this by letting the user >>> >>>>>> provide >>> >>>>>> their own data objects. And for the marshallers it gives >>> >>>>>> you >>> >>>>>> two options: a) write the marshaller yourself (hated), >>> >>>>>> b) >>> >>>>>> annotated your data objects and the marshaller gets >>> >>>>>> generated (loved). Protostream does not currently >>> >>>>>> support >>> >>>>>> service definitions right now but this is something I >>> >>>>>> started to investigate recently after Galder asked me if >>> >>>>>> I >>> >>>>>> think it's doable. 
I think I'll only find out after I do >>> >>>>>> it:) >>> >>>>>> >>> >>>>>> Adrian >>> >>>>>> >>> >>>>>> >>> >>>>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: >>> >>>>>>> Hi Infinispan developers, >>> >>>>>>> >>> >>>>>>> I'm working on a solution for developers who need to >>> >>>>>>> access >>> >>>>>>> Infinispan services through different programming >>> >>>>>>> languages. >>> >>>>>>> >>> >>>>>>> The focus is not on developing a full featured client, >>> >>>>>>> but >>> >>>>>>> rather discover the value and the limits of this >>> >>>>>>> approach. >>> >>>>>>> >>> >>>>>>> - is it possible to automatically generate useful >>> >>>>>>> clients >>> >>>>>>> in different languages? >>> >>>>>>> - can that clients interoperate on the same cache with >>> >>>>>>> the >>> >>>>>>> same data types? >>> >>>>>>> >>> >>>>>>> I came out with a small prototype that I would like to >>> >>>>>>> submit to you and on which I would like to gather your >>> >>>>>>> impressions. >>> >>>>>>> >>> >>>>>>> You can found the project here [1]: is a gRPC-based >>> >>>>>>> client/server architecture for Infinispan based on and >>> >>>>>>> EmbeddedCache, with very few features exposed atm. >>> >>>>>>> >>> >>>>>>> Currently the project is nothing more than a poc with >>> >>>>>>> the >>> >>>>>>> following interesting features: >>> >>>>>>> >>> >>>>>>> - client can be generated in all the grpc supported >>> >>>>>>> language: java, go, c++ examples are provided; >>> >>>>>>> - the interface is full typed. No need for marshaller >>> >>>>>>> and >>> >>>>>>> clients build in different language can cooperate on >>> >>>>>>> the >>> >>>>>>> same cache; >>> >>>>>>> >>> >>>>>>> The second item is my preferred one beacuse it frees >>> >>>>>>> the >>> >>>>>>> developer from data marshalling. >>> >>>>>>> >>> >>>>>>> What do you think about? >>> >>>>>>> Sounds interesting? >>> >>>>>>> Can you see any flaw? 
>>> >>>>>>> >>> >>>>>>> There's also a list of issues for the future [2], >>> >>>>>>> basically >>> >>>>>>> I would like to investigate these questions: >>> >>>>>>> How far this architecture can go? >>> >>>>>>> Topology, events, queries... how many of the Infinispan >>> >>>>>>> features can be fit in a grpc architecture? >>> >>>>>>> >>> >>>>>>> Thank you >>> >>>>>>> Vittorio >>> >>>>>>> >>> >>>>>>> [1] https://github.com/rigazilla/ispn-grpc >>> >>>>>>> [2] https://github.com/rigazilla/ispn-grpc/issues >>> >>>>>>> >>> >>>>>>> -- >>> >>>>>>> >>> >>>>>>> Vittorio Rigamonti >>> >>>>>>> >>> >>>>>>> Senior Software Engineer >>> >>>>>>> >>> >>>>>>> Red Hat >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> Milan, Italy >>> >>>>>>> >>> >>>>>>> vrigamon at redhat.com >>> >>>>>>> >>> >>>>>>> irc: rigazilla >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> _______________________________________________ >>> >>>>>>> infinispan-dev mailing list >>> >>>>>>> infinispan-dev at lists.jboss.org >>> >>>>>>> >>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>>>>> >>> >>>>>> >>> >>>>>> >>> >>>>>> -- >>> >>>>>> >>> >>>>>> Vittorio Rigamonti >>> >>>>>> >>> >>>>>> Senior Software Engineer >>> >>>>>> >>> >>>>>> Red Hat >>> >>>>>> >>> >>>>>> >>> >>>>>> >>> >>>>>> Milan, Italy >>> >>>>>> >>> >>>>>> vrigamon at redhat.com >>> >>>>>> >>> >>>>>> irc: rigazilla >>> >>>>>> >>> >>>>>> >>> >>>>> _______________________________________________ >>> >>>>> infinispan-dev mailing list >>> >>>>> infinispan-dev at lists.jboss.org >>> >>>>> >>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>>>> >>> >>>>> >>> >>>>> >>> >>>>> _______________________________________________ >>> >>>>> infinispan-dev mailing list >>> >>>>> infinispan-dev at lists.jboss.org >>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>>> >>> >>>> >>> >>>> _______________________________________________ >>> >>>> infinispan-dev mailing list >>> >>>> infinispan-dev at 
lists.jboss.org >>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >>> >> infinispan-dev mailing list >>> >> infinispan-dev at lists.jboss.org >>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Wed Jun 6 07:47:41 2018 From: galder at redhat.com (Galder Zamarreno) Date: Wed, 6 Jun 2018 13:47:41 +0200 Subject: [infinispan-dev] Infinispan client/server architecture based on gRPC In-Reply-To: References: <7b326b10-860d-4439-91b6-2b381e88af8f@redhat.com> <7ec7ec3e-5428-81c4-fa79-3ecb886ebec3@redhat.com> <77f344ca-be15-cd9c-0b5a-255fe9244e2a@redhat.com> <4b1c7cc3-455c-c333-4957-06a646258cdc@redhat.com> <04284c5a-1c73-e587-09e0-59b9fb053ef7@redhat.com> Message-ID: @Manik, great to hear from you! I also agree that gRPC brings a lot of benefits to our client/server architecture. In fact, I'd personally be happy to trade off "some" performance for all the benefits it brings. @Adrian I think you hit the nail with this: "...make it possible for a third party to generate their own infinispan grpc client in any new language that we do not already offer support...". 
We didn't consider this in 2010 when we first thought of using Google Protobuf to define the Hot Rod protocol: http://lists.jboss.org/pipermail/infinispan-dev/2010-January/004936.html That discussion ended with: > Manik and I were discussing this last week and came to the conclusion that as suggested by David on his 1st paragraph, using it would be tying us up to protobufs and its limitations, including the lack of support for other languages such as C#. That wasn't the only reason but it was one of the reasons. As Adrian rightly points out, we could just have gone with it and then implemented a missing language ourselves. That wasn't the only reason though, as I said in the Javaland presentation on the topic, back in 2010, Google didn't have much street credibility with open source libraries. No one knew what would happen to Protobuf, whether it'd be open sourced and left to die... or would evolve. Hindsight is a wonderful thing ;) @Sanne, I'd forgotten about that but yes! That's a very nice added feature too. You can decide whether you have a sync or async client on the spot. Both are generated. Also the "stream" keyword for streaming multiple elements is a nice feature for things like getAll. Cheers Galder On Wed, Jun 6, 2018 at 12:44 PM Sanne Grinovero wrote: > Thanks Manik! Great to hear some feedback from you, especially as you > have way more experience with gRPC. > > Beyond helping to develop (and maintain!) clients for a wider range of > programming languages - it would also help to provide both a > "traditional" and a non-blocking client for each such language, while > having to maintain just an async server implementation. > > Sanne > > On 6 June 2018 at 07:13, Manik Surtani wrote: > > Hello everyone! (Wow, it's been a while since I dropped by and said > hello... > > ) > > > > Super-interesting discussion. > > > > Adrian: "What is the approach you envision regarding the IDL? Should we > > strive for a pure IDL definition of the service? 
That could be an > > interesting approach that would make it possible for a third party to > > generate their own infinispan grpc client in any new language that we do > not > > already offer support, just based on the IDL. And maybe using a different > > grpc implementation if they do not find suitable the one from google." > > > > This is spot-on, and where I see value in gRPC being incorporated into > > Infinispan: making use of open(-ish) standards of RPC communication and > > applying that to how we do client/server. Good discussion on handling > > payload types in the interface definition. I've made use of some of the > > ideas discussed here when creating a proto-defined IDL to look up ... > more > > serialized proto definitions for given types! Keen to see what your PoC > > looks like. > > > > - M > > > > > > On Wed, 30 May 2018 at 08:19 Galder Zamarreno wrote: > >> > >> On Wed, May 30, 2018 at 5:00 PM Radim Vansa wrote: > >>> > >>> On 05/30/2018 02:53 PM, Sanne Grinovero wrote: > >>> > On 30 May 2018 at 13:26, Adrian Nistor wrote: > >>> >> Yest, the client needs that hash but that does not necessarily mean > it > >>> >> has to compute it itself. > >>> >> The hash should be applied to the storage format which might be > >>> >> different from the format the client sees. So hash computation could > >>> >> be > >>> >> done on the server, just a thought. > >>> > Unless we want to explore some form of hybrid gRPC which benefits > from > >>> > Hot Rod intelligence level 3? > >>> > >>> Since Tristan said that gRPC is viable only if the performance is > >>> comparable - I concluded that this involves the smart routing. I was > >>> hoping that gRPC networking layer would provide some hook to specify > the > >>> destination. > >> > >> > >> It does, via SubchannelPicker implementations. It requires key to be > sent > >> as HTTP header down the stack so that the SubchannelPicker can extract > it. 
> >> > >> SubchannelPicker impl can then apply hash on it and decide based on > >> available channels. > >> > >>> > >>> An alternative would be a proxy hosted on the same node > >>> that would do the routing. > >>> > >>> > >>> If we're to replace Hot Rod I was expecting the (generated) gRPC client > >>> to be extensible enough to allow us add client-side features (like near > >>> cache, maybe listeners would need client-side code too) but saving us > >>> most of the hassle with networking and parsing, while providing basic > >>> client in languages we don't embrace without additional cost. > >>> > >>> R. > >>> > >>> > > >>> > In which case the client will need to compute the hash before it can > >>> > hint the network layer were to connect to. > >>> > > >>> > Thanks, > >>> > Sanne > >>> > > >>> >> On 05/30/2018 02:47 PM, Radim Vansa wrote: > >>> >>> On 05/30/2018 12:46 PM, Adrian Nistor wrote: > >>> >>>> Thanks for clarifying this Galder. > >>> >>>> Yes, the network layer is indeed the culprit and the purpose of > this > >>> >>>> experiment. > >>> >>>> > >>> >>>> What is the approach you envision regarding the IDL? Should we > >>> >>>> strive > >>> >>>> for a pure IDL definition of the service? That could be an > >>> >>>> interesting > >>> >>>> approach that would make it possible for a third party to generate > >>> >>>> their own infinispan grpc client in any new language that we do > not > >>> >>>> already offer support, just based on the IDL. And maybe using a > >>> >>>> different grpc implementation if they do not find suitable the one > >>> >>>> from google. > >>> >>>> > >>> >>>> I was not suggesting we should do type transformation or anything > on > >>> >>>> the client side that would require an extra layer of code on top > of > >>> >>>> what grpc generates for the client, so maybe a pure IDL based > >>> >>>> service > >>> >>>> definition would indeed be possible, without extra helpers. No > type > >>> >>>> transformation, just type information. 
Exposing the type info that > >>> >>>> comes from the server would be enough, a lot better than dumbing > >>> >>>> everything down to a byte[]. > >>> >>> I may be wrong but key transformation on client is necessary for > >>> >>> correct > >>> >>> hash-aware routing, isn't it? We need to get byte array for each > key > >>> >>> and > >>> >>> apply murmur hash there (IIUC even when we use protobuf as the > >>> >>> storage > >>> >>> format, segment is based on the raw protobuf bytes, right?). > >>> >>> > >>> >>> Radim > >>> >>> > >>> >>>> Adrian > >>> >>>> > >>> >>>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote: > >>> >>>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor < > anistor at redhat.com > >>> >>>>> > wrote: > >>> >>>>> > >>> >>>>> Vittorio, a few remarks regarding your statement "...The > >>> >>>>> alternative to this is to develop a protostream equivalent > >>> >>>>> for > >>> >>>>> each supported language and it doesn't seem really feasible > >>> >>>>> to me." > >>> >>>>> > >>> >>>>> No way! That's a big misunderstanding. We do not need to > >>> >>>>> re-implement the protostream library in C/C++/C# or any new > >>> >>>>> supported language. > >>> >>>>> Protostream is just for Java and it is compatible with > >>> >>>>> Google's > >>> >>>>> protobuf lib we already use in the other clients. We can > >>> >>>>> continue > >>> >>>>> using Google's protobuf lib for these clients, with or > >>> >>>>> without gRPC. > >>> >>>>> Protostream does not handle protobuf services as gRPC does, > >>> >>>>> but > >>> >>>>> we can add support for that with little effort. > >>> >>>>> > >>> >>>>> The real problem here is if we want to replace our hot rod > >>> >>>>> invocation protocol with gRPC to save on the effort of > >>> >>>>> implementing and maintaining hot rod in all those clients. > I > >>> >>>>> wonder why the obvious question is being avoided in this > >>> >>>>> thread. > >>> >>>>> > >>> >>>>> > >>> >>>>> ^ It is not being avoided. 
I stated it quite clearly when I > replied > >>> >>>>> but maybe not with enough detail. So, I said: > >>> >>>>> > >>> >>>>>> The biggest problem I see in our client/server architecture > is > >>> >>>>>> the > >>> >>>>> ability to quickly deliver features/APIs across multiple language > >>> >>>>> clients. Both Vittorio and I have seen how long it takes to > >>> >>>>> implement > >>> >>>>> all the different features available in Java client and port them > >>> >>>>> to > >>> >>>>> Node.js, C/C++/C#...etc. This effort lead by Vittorio is trying > to > >>> >>>>> improve on that by having some of that work done for us. Granted, > >>> >>>>> not > >>> >>>>> all of it will be done, but it should give us some good > foundations > >>> >>>>> on which to build. > >>> >>>>> > >>> >>>>> To expand on it a bit further: the reason it takes us longer to > get > >>> >>>>> different features in is because each client implements its own > >>> >>>>> network layer, parses the protocol and does type transformations > >>> >>>>> (between byte[] and whatever the client expects). > >>> >>>>> > >>> >>>>> IMO, the most costly things there are getting the network layer > >>> >>>>> right > >>> >>>>> (from experience with Node.js, it has taken a while to do so) and > >>> >>>>> parsing work (not only parsing itself, but doing it in a > efficient > >>> >>>>> way). Network layer also includes load balancing, failover, > cluster > >>> >>>>> failover...etc. > >>> >>>>> > >>> >>>>> From past experience, transforming from byte[] to what the > client > >>> >>>>> expects has never really been very problematic for me. What's > been > >>> >>>>> difficult here is coming up with encoding architecture that > Gustavo > >>> >>>>> lead, whose aim was to improve on the initial compatibility mode. > >>> >>>>> But, with that now clear, understood and proven to solve our > >>> >>>>> issues, > >>> >>>>> the rest in this area should be fairly straightforward IMO. 
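The hash-aware routing mentioned in this thread — the key shipped as an HTTP header so a custom SubchannelPicker can hash it and choose the destination channel — can be sketched roughly like this. This is a minimal Python stand-in, not the actual client code: Infinispan really applies MurmurHash3 to the raw key bytes and maps it onto segment owners, while here a generic hash and all names (pick_channel, segment_of, the topology) are illustrative.

```python
import hashlib

# Illustrative stand-in: Infinispan actually uses MurmurHash3 over the raw
# (storage-format) key bytes; sha256 here just gives a deterministic hash.
def key_hash(key_bytes: bytes) -> int:
    return int.from_bytes(hashlib.sha256(key_bytes).digest()[:4], "big")

def segment_of(key_bytes: bytes, num_segments: int) -> int:
    # Map the key's hash onto one of the cache's segments.
    return key_hash(key_bytes) % num_segments

def pick_channel(key_bytes, segment_owners, channels, num_segments):
    # What a custom gRPC SubchannelPicker could do once the key travels
    # as an HTTP header: extract it, hash it, route to the owner's channel.
    owner = segment_owners[segment_of(key_bytes, num_segments)]
    return channels[owner]

# Hypothetical topology: 3 servers owning 8 segments round-robin.
channels = ["server-0", "server-1", "server-2"]
owners = [i % len(channels) for i in range(8)]

target = pick_channel(b"user:42", owners, channels, 8)
```

The point of the sketch is that the routing decision stays entirely client-side and deterministic, so repeated operations on the same key always reach the same owner — the rest of the networking (connections, retries, failover) stays with the gRPC runtime.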
> >>> >>>>> > >>> >>>>> Type transformation, once done, is a constant. As we add more Hot > >>> >>>>> Rod > >>> >>>>> operations, it's mostly the parsing that starts to become more > >>> >>>>> work. > >>> >>>>> Network can also become more work if instead of RPC commands you > >>> >>>>> start supporting streams based commands. > >>> >>>>> > >>> >>>>> gRPC solves the network (FYI: with key as HTTP header and > >>> >>>>> SubchannelPicker you can do hash-aware routing) and parsing for > us. > >>> >>>>> I > >>> >>>>> don't see the need for it to solve our type transformations for > us. > >>> >>>>> If it does it, great, but does it support our compatibility > >>> >>>>> requirements? (I had already told Vittorio to check Gustavo on > >>> >>>>> this). > >>> >>>>> Type transformation is a lower prio for me, network and parsing > are > >>> >>>>> more important. > >>> >>>>> > >>> >>>>> Hope this clarifies better my POV. > >>> >>>>> > >>> >>>>> Cheers > >>> >>>>> > >>> >>>>> > >>> >>>>> > >>> >>>>> Adrian > >>> >>>>> > >>> >>>>> > >>> >>>>> On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote: > >>> >>>>>> Thanks Adrian, > >>> >>>>>> > >>> >>>>>> of course there's a marshalling work under the cover and > >>> >>>>>> that is > >>> >>>>>> reflected into the generated code (specially the accessor > >>> >>>>>> methods generated from the oneof clause). > >>> >>>>>> > >>> >>>>>> My opinion is that on the client side this could be > >>> >>>>>> accepted, as > >>> >>>>>> long as the API are well defined and documented: > application > >>> >>>>>> developer can build an adhoc decorator on the top if > needed. > >>> >>>>>> The > >>> >>>>>> alternative to this is to develop a protostream equivalent > >>> >>>>>> for > >>> >>>>>> each supported language and it doesn't seem really > feasible > >>> >>>>>> to me. 
> >>> >>>>>> > >>> >>>>>> On the server side (java only) the situation is different: > >>> >>>>>> protobuf is optimized for streaming not for storing so > >>> >>>>>> probably > >>> >>>>>> a Protostream layer is needed. > >>> >>>>>> > >>> >>>>>> On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor > >>> >>>>>> > wrote: > >>> >>>>>> > >>> >>>>>> Hi Vittorio, > >>> >>>>>> thanks for exploring gRPC. It seems like a very > elegant > >>> >>>>>> solution for exposing services. I'll have a look at > your > >>> >>>>>> PoC > >>> >>>>>> soon. > >>> >>>>>> > >>> >>>>>> I feel there are some remarks that need to be made > >>> >>>>>> regarding > >>> >>>>>> gRPC. gRPC is just some nice cheesy topping on top of > >>> >>>>>> protobuf. Google's implementation of protobuf, to be > >>> >>>>>> more > >>> >>>>>> precise. > >>> >>>>>> It does not need handwritten marshallers, but the 'No > >>> >>>>>> need > >>> >>>>>> for marshaller' does not accurately describe it. > >>> >>>>>> Marshallers > >>> >>>>>> are needed and are generated under the cover by the > >>> >>>>>> library > >>> >>>>>> and so are the data objects and you are unfortunately > >>> >>>>>> forced > >>> >>>>>> to use them. That's both the good news and the bad > >>> >>>>>> news:) > >>> >>>>>> The whole thing looks very promising and friendly for > >>> >>>>>> many > >>> >>>>>> uses cases, especially for demos and PoCs :))). Nobody > >>> >>>>>> wants > >>> >>>>>> to write those marshallers. But it starts to become a > >>> >>>>>> nuisance if you want to use your own data objects. > >>> >>>>>> There is also the ugliness and excessive memory > >>> >>>>>> footprint of > >>> >>>>>> the generated code, which is the reason Infinispan did > >>> >>>>>> not > >>> >>>>>> adopt the protobuf-java library although it did adopt > >>> >>>>>> protobuf as an encoding format. 
> >>> >>>>>> The Protostream library was created as an alternative > >>> >>>>>> implementation to solve the aforementioned problems > with > >>> >>>>>> the > >>> >>>>>> generated code. It solves this by letting the user > >>> >>>>>> provide > >>> >>>>>> their own data objects. And for the marshallers it > gives > >>> >>>>>> you > >>> >>>>>> two options: a) write the marshaller yourself (hated), > >>> >>>>>> b) > >>> >>>>>> annotated your data objects and the marshaller gets > >>> >>>>>> generated (loved). Protostream does not currently > >>> >>>>>> support > >>> >>>>>> service definitions right now but this is something I > >>> >>>>>> started to investigate recently after Galder asked me > if > >>> >>>>>> I > >>> >>>>>> think it's doable. I think I'll only find out after I > do > >>> >>>>>> it:) > >>> >>>>>> > >>> >>>>>> Adrian > >>> >>>>>> > >>> >>>>>> > >>> >>>>>> On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote: > >>> >>>>>>> Hi Infinispan developers, > >>> >>>>>>> > >>> >>>>>>> I'm working on a solution for developers who need to > >>> >>>>>>> access > >>> >>>>>>> Infinispan services through different programming > >>> >>>>>>> languages. > >>> >>>>>>> > >>> >>>>>>> The focus is not on developing a full featured > client, > >>> >>>>>>> but > >>> >>>>>>> rather discover the value and the limits of this > >>> >>>>>>> approach. > >>> >>>>>>> > >>> >>>>>>> - is it possible to automatically generate useful > >>> >>>>>>> clients > >>> >>>>>>> in different languages? > >>> >>>>>>> - can that clients interoperate on the same cache > with > >>> >>>>>>> the > >>> >>>>>>> same data types? > >>> >>>>>>> > >>> >>>>>>> I came out with a small prototype that I would like > to > >>> >>>>>>> submit to you and on which I would like to gather > your > >>> >>>>>>> impressions. 
> >>> >>>>>>> > >>> >>>>>>> You can found the project here [1]: is a gRPC-based > >>> >>>>>>> client/server architecture for Infinispan based on > and > >>> >>>>>>> EmbeddedCache, with very few features exposed atm. > >>> >>>>>>> > >>> >>>>>>> Currently the project is nothing more than a poc with > >>> >>>>>>> the > >>> >>>>>>> following interesting features: > >>> >>>>>>> > >>> >>>>>>> - client can be generated in all the grpc supported > >>> >>>>>>> language: java, go, c++ examples are provided; > >>> >>>>>>> - the interface is full typed. No need for marshaller > >>> >>>>>>> and > >>> >>>>>>> clients build in different language can cooperate on > >>> >>>>>>> the > >>> >>>>>>> same cache; > >>> >>>>>>> > >>> >>>>>>> The second item is my preferred one beacuse it frees > >>> >>>>>>> the > >>> >>>>>>> developer from data marshalling. > >>> >>>>>>> > >>> >>>>>>> What do you think about? > >>> >>>>>>> Sounds interesting? > >>> >>>>>>> Can you see any flaw? > >>> >>>>>>> > >>> >>>>>>> There's also a list of issues for the future [2], > >>> >>>>>>> basically > >>> >>>>>>> I would like to investigate these questions: > >>> >>>>>>> How far this architecture can go? > >>> >>>>>>> Topology, events, queries... how many of the > Infinispan > >>> >>>>>>> features can be fit in a grpc architecture? 
> >>> >>>>>>> > >>> >>>>>>> Thank you > >>> >>>>>>> Vittorio > >>> >>>>>>> > >>> >>>>>>> [1] https://github.com/rigazilla/ispn-grpc > >>> >>>>>>> [2] https://github.com/rigazilla/ispn-grpc/issues > >>> >>>>>>> > >>> >>>>>>> -- > >>> >>>>>>> > >>> >>>>>>> Vittorio Rigamonti > >>> >>>>>>> > >>> >>>>>>> Senior Software Engineer > >>> >>>>>>> > >>> >>>>>>> Red Hat > >>> >>>>>>> > >>> >>>>>>> > >>> >>>>>>> > >>> >>>>>>> Milan, Italy > >>> >>>>>>> > >>> >>>>>>> vrigamon at redhat.com > >>> >>>>>>> > >>> >>>>>>> irc: rigazilla > >>> >>>>>>> > >>> >>>>>>> > >>> >>>>>>> > >>> >>>>>>> > >>> >>>>>>> _______________________________________________ > >>> >>>>>>> infinispan-dev mailing list > >>> >>>>>>> infinispan-dev at lists.jboss.org > >>> >>>>>>> > >>> >>>>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> >>>>>> > >>> >>>>>> > >>> >>>>>> > >>> >>>>>> -- > >>> >>>>>> > >>> >>>>>> Vittorio Rigamonti > >>> >>>>>> > >>> >>>>>> Senior Software Engineer > >>> >>>>>> > >>> >>>>>> Red Hat > >>> >>>>>> > >>> >>>>>> > >>> >>>>>> > >>> >>>>>> Milan, Italy > >>> >>>>>> > >>> >>>>>> vrigamon at redhat.com > >>> >>>>>> > >>> >>>>>> irc: rigazilla > >>> >>>>>> > >>> >>>>>> > >>> >>>>> _______________________________________________ > >>> >>>>> infinispan-dev mailing list > >>> >>>>> infinispan-dev at lists.jboss.org > >>> >>>>> > >>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> >>>>> > >>> >>>>> > >>> >>>>> > >>> >>>>> _______________________________________________ > >>> >>>>> infinispan-dev mailing list > >>> >>>>> infinispan-dev at lists.jboss.org > >>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> >>>> > >>> >>>> > >>> >>>> _______________________________________________ > >>> >>>> infinispan-dev mailing list > >>> >>>> infinispan-dev at lists.jboss.org > >>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> >> _______________________________________________ > >>> >> infinispan-dev mailing list > >>> >> 
infinispan-dev at lists.jboss.org > >>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >>> > >>> -- > >>> Radim Vansa > >>> JBoss Performance Team > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180606/78fe88f2/attachment-0001.html From mudokonman at gmail.com Mon Jun 11 14:05:40 2018 From: mudokonman at gmail.com (William Burns) Date: Mon, 11 Jun 2018 14:05:40 -0400 Subject: [infinispan-dev] Infinispan 9.2.5.Final is available Message-ID: Dear all, We have a new, very minor release available, 9.2.5.Final. This release contains only a single fix for the Infinispan Server [1]. You can download it at the usual place [2]. [1] https://issues.jboss.org/browse/ISPN-9281 [2] http://infinispan.org/download/ Will -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180611/76800daa/attachment.html From rvansa at redhat.com Mon Jun 25 11:51:05 2018 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 25 Jun 2018 17:51:05 +0200 Subject: [infinispan-dev] ISPN-6478 In-Reply-To: References: <1035299360.30068312.1529929775680.JavaMail.zimbra@redhat.com> <832e66b2-b81b-6b1a-8201-f4a6360fb977@redhat.com> Message-ID: <351d1886-79b6-e569-f4e6-dfb81fdcc627@redhat.com> Adding -dev list as we're discussing a new feature. On 06/25/2018 05:20 PM, Galder Zamarreno wrote: > The question is more about the potential problem of filling up the > event queue: > > First, which thread should be the one to queue up the event? Netty > thread or thread pool? > > Second, if the queue is full, the thread hangs waiting for it to > become non-full. Should this be the case? If this is the Netty thread, > it's probably quite bad. If it's from the remote thread pool, maybe > not as bad but still bad. Agreed, blocking is bad, always. I don't believe that we should propagate the backpressure from slow clients (receiving events) to fast clients doing the updates. The only thing we should do about slow clients is report that we have some trouble with them (monitoring, logs...). > > Regardless, if the event queue is full, we need to do something > different. Dropping the connection doesn't sound too bad. Our > guarantees are at least once, so we can get it again on reconnect with > includeCurrentState. > > We could signal such an event with a different event, just like we do > with ClientCacheFailoverEvent, so that with this new event, you know > you've been disconnected because events are not being consumed fast > enough? Yep, but we can't start sending the event to newer clients even if we increase the protocol version; we could send such an event only if the client is explicitly listening for that and in the other case we should still drop the connection. 
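The non-blocking policy being discussed can be sketched as a per-client bounded queue that fails fast instead of stalling the writer thread. This is a hypothetical illustration, not the actual Infinispan server code — the class and method names are made up; the idea is that once the limit is hit the connection is marked for dropping and the client is expected to reconnect with includeCurrentState, relying on the at-least-once guarantee:

```python
from collections import deque

class ClientEventQueue:
    """Hypothetical per-client event buffer: never blocks the writer thread."""

    def __init__(self, limit: int):
        self.limit = limit
        self.events = deque()
        self.dropped = False  # when True, the connection should be closed

    def offer(self, event) -> bool:
        # Called from the Netty (or remote-pool) thread: must not block.
        if self.dropped:
            return False
        if len(self.events) >= self.limit:
            # Over limit: give up on this slow client instead of
            # back-pressuring fast writers; the client reconnects and
            # re-syncs via includeCurrentState.
            self.dropped = True
            self.events.clear()
            return False
        self.events.append(event)
        return True

q = ClientEventQueue(limit=3)
results = [q.offer(f"modified-{i}") for i in range(5)]
```

With a limit of 3, the first three offers succeed, the fourth trips the drop flag and clears the buffer, and everything after that is rejected — the writer thread never waits.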
Radim > > Cheers > > On Mon, Jun 25, 2018 at 2:47 PM Radim Vansa > wrote: > > You mean that the writes are executed by the Netty thread instead of a > remote-thread-pool thread? Ok, I think that the issue is still valid. > > Since there's no event truncation, when the queue is over limit > all we > can do is probably dropping the connection. The client will reconnect > (if it's still alive) and it will ask for current state (btw. will > the > user know that it has lost some events when it reconnects on an > includeCurrentState=false listener? I don't think so... we might just > drop the events anyway then). The current state will be sent in > another > threadpool, and as an improvement we could somehow reject the > listener > registration if we're over limit on other connections. > > Radim > > On 06/25/2018 02:29 PM, William Burns wrote: > > Hrmm, I don't know for sure. Radim changed how the events all > work now, so it might be fixed. > > > > - Will > > > > ----- Original Message ----- >> From: "Galder Zamarreno" > > >> To: "William Burns" > > >> Sent: Tuesday, 19 June, 2018 6:58:57 AM >> Subject: ISPN-6478 >> >> Hey, >> >> Is [1] still an issue for us? >> >> Cheers, >> >> [1] https://issues.jboss.org/browse/ISPN-6478 >> > > -- > Radim Vansa > > JBoss Performance Team > -- Radim Vansa JBoss Performance Team From sanne at infinispan.org Mon Jun 25 16:01:35 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 25 Jun 2018 21:01:35 +0100 Subject: [infinispan-dev] Upgrade to JTA 1.2 API? Message-ID: Would it be possible to upgrade - org.jboss.spec.javax.transaction:jboss-transaction-api_1.1_spec:jar:1.0.1.Final to - org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec:jar:1.0.1.Final ? Thanks! 
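The upgrade Sanne is asking about boils down to a coordinate change in the build. As a sketch (the version numbers come from the coordinates quoted above; the surrounding POM structure is illustrative, not a copy of Infinispan's actual parent POM), the dependency after the bump would look like:

```xml
<!-- before: artifactId jboss-transaction-api_1.1_spec -->
<dependency>
  <groupId>org.jboss.spec.javax.transaction</groupId>
  <artifactId>jboss-transaction-api_1.2_spec</artifactId>
  <version>1.0.1.Final</version>
</dependency>
```

Note the spec version lives in the artifactId, not the version element, so this is a new artifact rather than a simple version bump — which is why compatibility questions like Galder's below are worth checking before merging.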
From ancosen1985 at yahoo.com Tue Jun 26 03:37:47 2018 From: ancosen1985 at yahoo.com (Andrea Cosentino) Date: Tue, 26 Jun 2018 07:37:47 +0000 (UTC) Subject: [infinispan-dev] Infinispan Server 9.3.0.Final Zip is missing References: <645513825.2400024.1529998667406.ref@mail.yahoo.com> Message-ID: <645513825.2400024.1529998667406@mail.yahoo.com> Looking at Maven central http://repo2.maven.org/maven2/org/infinispan/server/infinispan-server/9.3.0.Final/ It looks like the Infinispan-server zip is missing. Is this wanted? Thanks. -- Andrea Cosentino ---------------------------------- Apache Camel PMC Chair Apache Karaf Committer Apache Servicemix PMC Member Email: ancosen1985 at yahoo.com Twitter: @oscerd2 Github: oscerd From galder at redhat.com Tue Jun 26 04:38:59 2018 From: galder at redhat.com (Galder Zamarreno) Date: Tue, 26 Jun 2018 10:38:59 +0200 Subject: [infinispan-dev] Infinispan Server 9.3.0.Final Zip is missing In-Reply-To: <645513825.2400024.1529998667406@mail.yahoo.com> References: <645513825.2400024.1529998667406.ref@mail.yahoo.com> <645513825.2400024.1529998667406@mail.yahoo.com> Message-ID: I'm not sure. There have been some changes in that area recently. I'll let others comment. AFAIK, the ZIP available from Maven is not the full server, but a base server which on startup downloads dependencies. In the meantime, the ZIP file should be available from here which is really the full server with all the dependencies. 
> > -- > Andrea Cosentino > ---------------------------------- > Apache Camel PMC Chair > Apache Karaf Committer > Apache Servicemix PMC Member > Email: ancosen1985 at yahoo.com > Twitter: @oscerd2 > Github: oscerd > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180626/22225908/attachment.html From ancosen1985 at yahoo.com Tue Jun 26 04:44:51 2018 From: ancosen1985 at yahoo.com (Andrea Cosentino) Date: Tue, 26 Jun 2018 08:44:51 +0000 (UTC) Subject: [infinispan-dev] Infinispan Server 9.3.0.Final Zip is missing In-Reply-To: References: <645513825.2400024.1529998667406.ref@mail.yahoo.com> <645513825.2400024.1529998667406@mail.yahoo.com> Message-ID: <866186979.2415892.1530002691594@mail.yahoo.com> I'm asking because in the infinispan kafka connector we use the Zip for integration tests https://github.com/infinispan/infinispan-kafka/blob/master/core/pom.xml#L197 -- Andrea Cosentino? ---------------------------------- Apache Camel PMC Chair Apache Karaf Committer Apache Servicemix PMC Member Email: ancosen1985 at yahoo.com Twitter: @oscerd2 Github: oscerd On Tuesday, June 26, 2018, 10:39:12 AM GMT+2, Galder Zamarreno wrote: I'm not sure. There has been some changes in that area recently. I'll let others comment. AFAIK, the ZIP available from Maven is not the full server, but a base server which on startup downloads dependencies. In the mean time, the ZIP file should be available from here which is really the full server with all the dependencies. 
http://downloads.jboss.org/infinispan/9.3.0.Final/infinispan-server-9.3.0.Final.zip Cheers Galder On Tue, Jun 26, 2018 at 9:38 AM Andrea Cosentino wrote: > Looking at Maven central > > http://repo2.maven.org/maven2/org/infinispan/server/infinispan-server/9.3.0.Final/ > > It looks like the Infinispan-server zip is missing. > > Is this wanted? > > Thanks. > > -- > Andrea Cosentino > ---------------------------------- > Apache Camel PMC Chair > Apache Karaf Committer > Apache Servicemix PMC Member > Email: ancosen1985 at yahoo.com > Twitter: @oscerd2 > Github: oscerd > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Tue Jun 26 04:56:23 2018 From: galder at redhat.com (Galder Zamarreno) Date: Tue, 26 Jun 2018 10:56:23 +0200 Subject: [infinispan-dev] Upgrade to JTA 1.2 API? In-Reply-To: References: Message-ID: Are they backwards compatible? Any gotchas we should be aware of? On Mon, Jun 25, 2018 at 10:07 PM Sanne Grinovero wrote: > Would it be possible to upgrade > - > org.jboss.spec.javax.transaction:jboss-transaction-api_1.1_spec:jar:1.0.1.Final > to > - > org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec:jar:1.0.1.Final > > ? > > Thanks! > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180626/4b5ab641/attachment-0001.html From galder at redhat.com Tue Jun 26 05:02:16 2018 From: galder at redhat.com (Galder Zamarreno) Date: Tue, 26 Jun 2018 11:02:16 +0200 Subject: [infinispan-dev] Infinispan Server 9.3.0.Final Zip is missing In-Reply-To: <866186979.2415892.1530002691594@mail.yahoo.com> References: <645513825.2400024.1529998667406.ref@mail.yahoo.com> <645513825.2400024.1529998667406@mail.yahoo.com> <866186979.2415892.1530002691594@mail.yahoo.com> Message-ID: Looking around, the server ZIP file on Maven might have moved here: http://repo2.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.3.0.Final/infinispan-server-build-9.3.0.Final.zip I believe that's the slimmed down version that downloads dependencies on the fly when you first run it. Cheers On Tue, Jun 26, 2018 at 10:45 AM Andrea Cosentino wrote: > I'm asking because in the infinispan kafka connector we use the Zip for > integration tests > > > https://github.com/infinispan/infinispan-kafka/blob/master/core/pom.xml#L197 > > -- > Andrea Cosentino > ---------------------------------- > Apache Camel PMC Chair > Apache Karaf Committer > Apache Servicemix PMC Member > Email: ancosen1985 at yahoo.com > Twitter: @oscerd2 > Github: oscerd > > > > > > > On Tuesday, June 26, 2018, 10:39:12 AM GMT+2, Galder Zamarreno < > galder at redhat.com> wrote: > > > > > > I'm not sure. There has been some changes in that area recently. I'll let > others comment. > > AFAIK, the ZIP available from Maven is not the full server, but a base > server which on startup downloads dependencies. > > In the mean time, the ZIP file should be available from here which is > really the full server with all the dependencies. 
> > http://downloads.jboss.org/infinispan/9.3.0.Final/infinispan-server-9.3.0.Final.zip > > Cheers > Galder > > On Tue, Jun 26, 2018 at 9:38 AM Andrea Cosentino > wrote: > > Looking at Maven central > > > > > http://repo2.maven.org/maven2/org/infinispan/server/infinispan-server/9.3.0.Final/ > > > > It looks like the Infinispan-server zip is missing. > > > > Is this wanted? > > > > Thanks. > > > > -- > > Andrea Cosentino > > ---------------------------------- > > Apache Camel PMC Chair > > Apache Karaf Committer > > Apache Servicemix PMC Member > > Email: ancosen1985 at yahoo.com > > Twitter: @oscerd2 > > Github: oscerd > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180626/ec6cc3b2/attachment.html From sanne at infinispan.org Tue Jun 26 05:08:01 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 26 Jun 2018 10:08:01 +0100 Subject: [infinispan-dev] Upgrade to JTA 1.2 API? In-Reply-To: References: Message-ID: If I knew I would have opened a JIRA and sent a PR ;) The most important point IMO is that other projects such as WildFly and Hibernate have been using the 1.2 version for a long time already, so Infinispan is effectively testing a version which is not the one we run it with, which might hide some issues. On Tue, 26 Jun 2018, 09:56 Galder Zamarreno, wrote: > Are the backwards compatible? Any gotchas we should be aware of? > > On Mon, Jun 25, 2018 at 10:07 PM Sanne Grinovero > wrote: > >> Would it be possible to upgrade >> - >> org.jboss.spec.javax.transaction:jboss-transaction-api_1.1_spec:jar:1.0.1.Final >> to >> - >> org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec:jar:1.0.1.Final >> >> ? >> >> Thanks! 
>> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180626/9d93002e/attachment.html From gustavo at infinispan.org Tue Jun 26 05:10:04 2018 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Tue, 26 Jun 2018 10:10:04 +0100 Subject: [infinispan-dev] Infinispan Server 9.3.0.Final Zip is missing In-Reply-To: References: <645513825.2400024.1529998667406.ref@mail.yahoo.com> <645513825.2400024.1529998667406@mail.yahoo.com> <866186979.2415892.1530002691594@mail.yahoo.com> Message-ID: From my experience, the slimmed-down server does not work in all environments and it's slower to bootstrap than the full server. If you are looking to use the server zip for testing purposes, I'd recommend using the download-maven-plugin [1] to grab the server from [2] [1] https://github.com/maven-download-plugin/maven-download-plugin#wget-goal [2] http://downloads.jboss.org/infinispan/9.3.0.Final/infinispan-server-9.3.0.Final.zip Thanks, Gustavo On Tue, Jun 26, 2018 at 10:02 AM, Galder Zamarreno wrote: > Looking around, the server ZIP file on Maven might have moved here: > > http://repo2.maven.org/maven2/org/infinispan/server/ > infinispan-server-build/9.3.0.Final/infinispan-server-build- > 9.3.0.Final.zip > > I believe that's the slimmed down version that downloads dependencies on > the fly when you first run it. 
> > Cheers > > On Tue, Jun 26, 2018 at 10:45 AM Andrea Cosentino > wrote: > >> I'm asking because in the infinispan kafka connector we use the Zip for >> integration tests >> >> https://github.com/infinispan/infinispan-kafka/blob/master/ >> core/pom.xml#L197 >> >> -- >> Andrea Cosentino >> ---------------------------------- >> Apache Camel PMC Chair >> Apache Karaf Committer >> Apache Servicemix PMC Member >> Email: ancosen1985 at yahoo.com >> Twitter: @oscerd2 >> Github: oscerd >> >> >> >> >> >> >> On Tuesday, June 26, 2018, 10:39:12 AM GMT+2, Galder Zamarreno < >> galder at redhat.com> wrote: >> >> >> >> >> >> I'm not sure. There has been some changes in that area recently. I'll let >> others comment. >> >> AFAIK, the ZIP available from Maven is not the full server, but a base >> server which on startup downloads dependencies. >> >> In the mean time, the ZIP file should be available from here which is >> really the full server with all the dependencies. >> http://downloads.jboss.org/infinispan/9.3.0.Final/ >> infinispan-server-9.3.0.Final.zip >> >> Cheers >> Galder >> >> On Tue, Jun 26, 2018 at 9:38 AM Andrea Cosentino >> wrote: >> > Looking at Maven central >> > >> > http://repo2.maven.org/maven2/org/infinispan/server/ >> infinispan-server/9.3.0.Final/ >> > >> > It looks like the Infinispan-server zip is missing. >> > >> > Is this wanted? >> > >> > Thanks. 
>> > >> > -- >> > Andrea Cosentino >> > ---------------------------------- >> > Apache Camel PMC Chair >> > Apache Karaf Committer >> > Apache Servicemix PMC Member >> > Email: ancosen1985 at yahoo.com >> > Twitter: @oscerd2 >> > Github: oscerd >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180626/b65033c8/attachment-0001.html From ancosen1985 at yahoo.com Tue Jun 26 05:10:20 2018 From: ancosen1985 at yahoo.com (Andrea Cosentino) Date: Tue, 26 Jun 2018 09:10:20 +0000 (UTC) Subject: [infinispan-dev] Infinispan Server 9.3.0.Final Zip is missing In-Reply-To: References: <645513825.2400024.1529998667406.ref@mail.yahoo.com> <645513825.2400024.1529998667406@mail.yahoo.com> <866186979.2415892.1530002691594@mail.yahoo.com> Message-ID: <654769977.2411444.1530004220573@mail.yahoo.com> Thanks! Sorry for the noise. -- Andrea Cosentino ---------------------------------- Apache Camel PMC Chair Apache Karaf Committer Apache Servicemix PMC Member Email: ancosen1985 at yahoo.com Twitter: @oscerd2 Github: oscerd On Tuesday, June 26, 2018, 11:02:31 AM GMT+2, Galder Zamarreno wrote: Looking around, the server ZIP file on Maven might have moved here: http://repo2.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.3.0.Final/infinispan-server-build-9.3.0.Final.zip I believe that's the slimmed down version that downloads dependencies on the fly when you first run it. 
Cheers On Tue, Jun 26, 2018 at 10:45 AM Andrea Cosentino wrote: > I'm asking because in the infinispan kafka connector we use the Zip for integration tests > > https://github.com/infinispan/infinispan-kafka/blob/master/core/pom.xml#L197 > > -- > Andrea Cosentino > ---------------------------------- > Apache Camel PMC Chair > Apache Karaf Committer > Apache Servicemix PMC Member > Email: ancosen1985 at yahoo.com > Twitter: @oscerd2 > Github: oscerd > > > > > > > On Tuesday, June 26, 2018, 10:39:12 AM GMT+2, Galder Zamarreno wrote: > > > > > > I'm not sure. There has been some changes in that area recently. I'll let others comment. > > AFAIK, the ZIP available from Maven is not the full server, but a base server which on startup downloads dependencies. > > In the mean time, the ZIP file should be available from here which is really the full server with all the dependencies. > http://downloads.jboss.org/infinispan/9.3.0.Final/infinispan-server-9.3.0.Final.zip > > Cheers > Galder > > On Tue, Jun 26, 2018 at 9:38 AM Andrea Cosentino wrote: >> Looking at Maven central >> >> http://repo2.maven.org/maven2/org/infinispan/server/infinispan-server/9.3.0.Final/ >> >> It looks like the Infinispan-server zip is missing. >> >> Is this wanted? >> >> Thanks. >> >> -- >> Andrea Cosentino >> ---------------------------------- >> Apache Camel PMC Chair >> Apache Karaf Committer >> Apache Servicemix PMC Member >> Email: ancosen1985 at yahoo.com >> Twitter: @oscerd2 >> Github: oscerd >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > From jonathan.halliday at redhat.com Tue Jun 26 05:20:33 2018 From: jonathan.halliday at redhat.com (Jonathan Halliday) Date: Tue, 26 Jun 2018 10:20:33 +0100 Subject: [infinispan-dev] Upgrade to JTA 1.2 API? 
In-Reply-To: References: Message-ID: <0c5635a7-f566-e307-a28c-73c69c8840ce@redhat.com> 1.2 added the @Transactional and @TransactionScoped annotations, but did not change the existing interfaces. It's a low-risk upgrade. Jonathan. On 26/06/18 10:08, Sanne Grinovero wrote: > If I knew I would have opened a JIRA and sent a PR ;) > > The most important point IMO is that other projects such as WildFly and > Hibernate are using the 1.2 version since a long time already so > Infinispan is effectively testing a version which is not the one we run > it with, thus might hide some issue. > > > On Tue, 26 Jun 2018, 09:56 Galder Zamarreno, > wrote: > > Are the backwards compatible? Any gotchas we should be aware of? > > On Mon, Jun 25, 2018 at 10:07 PM Sanne Grinovero > > wrote: > > Would it be possible to upgrade > - > org.jboss.spec.javax.transaction:jboss-transaction-api_1.1_spec:jar:1.0.1.Final > to > - > org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec:jar:1.0.1.Final > > ? > > Thanks! > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From sanne at infinispan.org Tue Jun 26 06:59:43 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 26 Jun 2018 11:59:43 +0100 Subject: [infinispan-dev] Upgrade to JTA 1.2 API? 
In-Reply-To: <0c5635a7-f566-e307-a28c-73c69c8840ce@redhat.com> References: <0c5635a7-f566-e307-a28c-73c69c8840ce@redhat.com> Message-ID: On Tue, 26 Jun 2018 at 10:20, Jonathan Halliday wrote: > > > 1.2 added the @Transactional and @TransactionScoped annotations, but did > not change the existing interfaces. It's a low risk upgrade. Thanks Jonathan! - https://issues.jboss.org/browse/ISPN-9324 > > Jonathan. > > On 26/06/18 10:08, Sanne Grinovero wrote: > > If I knew I would have opened a JIRA and sent a PR ;) > > > > The most important point IMO is that other projects such as WildFly and > > Hibernate are using the 1.2 version since a long time already so > > Infinispan is effectively testing a version which is not the one we run > > it with, thus might hide some issue. > > > > > > On Tue, 26 Jun 2018, 09:56 Galder Zamarreno, > > wrote: > > > > Are the backwards compatible? Any gotchas we should be aware of? > > > > On Mon, Jun 25, 2018 at 10:07 PM Sanne Grinovero > > > wrote: > > > > Would it be possible to upgrade > > - > > org.jboss.spec.javax.transaction:jboss-transaction-api_1.1_spec:jar:1.0.1.Final > > to > > - > > org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec:jar:1.0.1.Final > > > > ? > > > > Thanks! > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Registered in England and Wales under Company Registration No. 
03798903 > Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander From sanne at infinispan.org Tue Jun 26 07:45:52 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 26 Jun 2018 12:45:52 +0100 Subject: [infinispan-dev] Build broken Message-ID: Radim, Dan, your last commit 58aa3b2185 broke master. Please fix or revert? Thanks From sanne at infinispan.org Tue Jun 26 08:06:14 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 26 Jun 2018 13:06:14 +0100 Subject: [infinispan-dev] Build broken In-Reply-To: References: Message-ID: Dan fixed it via PR #6094 Thanks! On Tue, 26 Jun 2018 at 12:45, Sanne Grinovero wrote: > > Radim, Dan, your last commit 58aa3b2185 broke master. > > Please fix or revert? > > Thanks