[infinispan-dev] Infinispan client/server architecture based on gRPC

Adrian Nistor anistor at redhat.com
Wed May 30 08:26:51 EDT 2018


Yes, the client needs that hash, but that does not necessarily mean it
has to compute it itself.
The hash should be applied to the storage format, which might be
different from the format the client sees. So the hash computation
could be done on the server; just a thought.
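
To make the routing question below concrete: on the client, the
computation would be roughly the following (a sketch only; it uses
MurmurHash3 from infinispan-commons and a naive modulo mapping,
glossing over the real segment-ownership logic and the
server-provided topology):

    import org.infinispan.commons.hash.MurmurHash3;

    public final class SegmentRouting {
       // keyBytes: the raw storage-format (e.g. protobuf) key bytes
       static int segmentOf(byte[] keyBytes, int numSegments) {
          int h = MurmurHash3.getInstance().hash(keyBytes);
          return (h & Integer.MAX_VALUE) % numSegments;
       }
    }

The open question is only where this runs: on the client, or on the
server as suggested above.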

On 05/30/2018 02:47 PM, Radim Vansa wrote:
> On 05/30/2018 12:46 PM, Adrian Nistor wrote:
>> Thanks for clarifying this Galder.
>> Yes, the network layer is indeed the culprit and the purpose of this
>> experiment.
>>
>> What is the approach you envision regarding the IDL? Should we strive
>> for a pure IDL definition of the service? That could be an interesting
>> approach: it would make it possible for a third party to generate
>> their own Infinispan gRPC client in any new language that we do not
>> already support, just based on the IDL, and maybe using a different
>> gRPC implementation if they do not find Google's suitable.
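>>
>> Just to sketch the idea (names purely illustrative, not the actual
>> service from Vittorio's PoC), a pure IDL definition could be as
>> simple as:
>>
>>     syntax = "proto3";
>>
>>     message Key   { string key = 1; }
>>     message Value {
>>       oneof value {
>>         string string_value = 1;
>>         int64  long_value   = 2;
>>       }
>>     }
>>     message KeyValue { Key key = 1; Value value = 2; }
>>
>>     service Cache {
>>       rpc Put (KeyValue) returns (Value);  // returns previous value
>>       rpc Get (Key)      returns (Value);
>>     }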
>>
>> I was not suggesting we should do type transformation or anything on
>> the client side that would require an extra layer of code on top of
>> what gRPC generates for the client, so maybe a pure IDL-based service
>> definition would indeed be possible, without extra helpers. No type
>> transformation, just type information. Exposing the type info that
>> comes from the server would be enough, a lot better than dumbing
>> everything down to a byte[].
> I may be wrong, but key transformation on the client is necessary for
> correct hash-aware routing, isn't it? We need to get the byte array for
> each key and apply the murmur hash to it (IIUC even when we use
> protobuf as the storage format, the segment is based on the raw
> protobuf bytes, right?).
>
> Radim
>
>> Adrian
>>
>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote:
>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor
>>> <anistor at redhat.com> wrote:
>>>
>>>      Vittorio, a few remarks regarding your statement "...The
>>>      alternative to this is to develop a protostream equivalent for
>>>      each supported language and it doesn't seem really feasible to me."
>>>
>>>      No way! That's a big misunderstanding. We do not need to
>>>      re-implement the protostream library in C/C++/C# or any newly
>>>      supported language.
>>>      Protostream is just for Java, and it is compatible with Google's
>>>      protobuf lib that we already use in the other clients. We can
>>>      continue using Google's protobuf lib for these clients, with or
>>>      without gRPC.
>>>      Protostream does not handle protobuf services as gRPC does, but
>>>      we can add support for that with little effort.
>>>
>>>      The real problem here is whether we want to replace our Hot Rod
>>>      invocation protocol with gRPC to save on the effort of
>>>      implementing and maintaining Hot Rod in all those clients. I
>>>      wonder why the obvious question is being avoided in this thread.
>>>
>>>
>>> ^ It is not being avoided. I stated it quite clearly when I replied
>>> but maybe not with enough detail. So, I said:
>>>
>>>>   The biggest problem I see in our client/server architecture is the
>>> ability to quickly deliver features/APIs across multiple language
>>> clients. Both Vittorio and I have seen how long it takes to implement
>>> all the different features available in the Java client and port them
>>> to Node.js, C/C++/C#, etc. This effort, led by Vittorio, is trying to
>>> improve on that by having some of that work done for us. Granted, not
>>> all of it will be done, but it should give us some good foundations
>>> on which to build.
>>>
>>> To expand on it a bit further: the reason it takes us longer to get
>>> different features in is that each client implements its own
>>> network layer, parses the protocol, and does type transformations
>>> (between byte[] and whatever the client expects).
>>>
>>> IMO, the most costly things there are getting the network layer right
>>> (from experience with Node.js, it has taken a while to do so) and the
>>> parsing work (not only the parsing itself, but doing it in an
>>> efficient way). The network layer also includes load balancing,
>>> failover, cluster failover, etc.
>>>
>>> From past experience, transforming from byte[] to what the client
>>> expects has never really been very problematic for me. What has been
>>> difficult here is coming up with the encoding architecture that
>>> Gustavo led, whose aim was to improve on the initial compatibility
>>> mode. But, with that now clear, understood, and proven to solve our
>>> issues, the rest in this area should be fairly straightforward IMO.
>>>
>>> Type transformation, once done, is a constant. As we add more Hot Rod
>>> operations, it's mostly the parsing that starts to become more work.
>>> The network can also become more work if, instead of RPC commands,
>>> you start supporting stream-based commands.
>>>
>>> gRPC solves the network (FYI: with the key as an HTTP header and a
>>> SubchannelPicker you can do hash-aware routing) and the parsing for
>>> us. I don't see the need for it to solve our type transformations as
>>> well. If it does, great, but does it support our compatibility
>>> requirements? (I had already told Vittorio to check with Gustavo on
>>> this.) Type transformation is a lower priority for me; network and
>>> parsing are more important.
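>>>
>>> To give an idea of the routing bit, a picker could look roughly like
>>> this (LoadBalancer.SubchannelPicker and Metadata are real grpc-java
>>> APIs; the header name and the owner lookups are made up for the
>>> sketch):
>>>
>>>     import io.grpc.LoadBalancer;
>>>     import io.grpc.Metadata;
>>>
>>>     abstract class HashAwarePicker extends LoadBalancer.SubchannelPicker {
>>>        // binary headers must use the "-bin" suffix; name is made up
>>>        static final Metadata.Key<byte[]> KEY_HEADER =
>>>              Metadata.Key.of("ispn-key-bin", Metadata.BINARY_BYTE_MARSHALLER);
>>>
>>>        // topology-derived lookups, left abstract in this sketch
>>>        abstract LoadBalancer.Subchannel ownerOf(byte[] keyBytes);
>>>        abstract LoadBalancer.Subchannel anyServer();
>>>
>>>        @Override
>>>        public LoadBalancer.PickResult pickSubchannel(
>>>              LoadBalancer.PickSubchannelArgs args) {
>>>           byte[] key = args.getHeaders().get(KEY_HEADER);
>>>           return LoadBalancer.PickResult.withSubchannel(
>>>                 key == null ? anyServer() : ownerOf(key));
>>>        }
>>>     }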
>>>
>>> Hope this clarifies my POV a bit better.
>>>
>>> Cheers
>>>
>>>
>>>
>>>      Adrian
>>>
>>>
>>>      On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote:
>>>>      Thanks Adrian,
>>>>
>>>>      of course there's marshalling work under the covers, and it is
>>>>      reflected in the generated code (especially the accessor
>>>>      methods generated from the oneof clause).
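>>>>
>>>>      For instance, for a hypothetical
>>>>      oneof value { string string_value = 1; int64 long_value = 2; }
>>>>      protobuf-java generates a ValueCase enum plus per-arm getters,
>>>>      so client code ends up looking like:
>>>>
>>>>          switch (resp.getValueCase()) {    // generated accessor
>>>>             case STRING_VALUE:  return resp.getStringValue();
>>>>             case LONG_VALUE:    return resp.getLongValue();
>>>>             case VALUE_NOT_SET: return null;
>>>>          }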
>>>>
>>>>      My opinion is that on the client side this could be accepted,
>>>>      as long as the APIs are well defined and documented: an
>>>>      application developer can build an ad-hoc decorator on top if
>>>>      needed. The alternative to this is to develop a protostream
>>>>      equivalent for each supported language and it doesn't seem
>>>>      really feasible to me.
>>>>
>>>>      On the server side (Java only) the situation is different:
>>>>      protobuf is optimized for streaming, not for storing, so a
>>>>      Protostream layer is probably needed.
>>>>
>>>>      On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor
>>>>      <anistor at redhat.com> wrote:
>>>>
>>>>          Hi Vittorio,
>>>>          thanks for exploring gRPC. It seems like a very elegant
>>>>          solution for exposing services. I'll have a look at your PoC
>>>>          soon.
>>>>
>>>>          I feel there are some remarks that need to be made regarding
>>>>          gRPC. gRPC is just some nice cheesy topping on top of
>>>>          protobuf. Google's implementation of protobuf, to be more
>>>>          precise.
>>>>          It does not need handwritten marshallers, but 'no need for
>>>>          marshallers' does not accurately describe it. Marshallers
>>>>          are still needed; they are generated under the covers by the
>>>>          library, and so are the data objects, which you are
>>>>          unfortunately forced to use. That's both the good news and
>>>>          the bad news :)
>>>>          The whole thing looks very promising and friendly for many
>>>>          use cases, especially for demos and PoCs :))). Nobody wants
>>>>          to write those marshallers. But it starts to become a
>>>>          nuisance if you want to use your own data objects.
>>>>          There is also the ugliness and excessive memory footprint of
>>>>          the generated code, which is the reason Infinispan did not
>>>>          adopt the protobuf-java library although it did adopt
>>>>          protobuf as an encoding format.
>>>>          The Protostream library was created as an alternative
>>>>          implementation to solve the aforementioned problems with the
>>>>          generated code. It does so by letting the user provide
>>>>          their own data objects. And for the marshallers it gives you
>>>>          two options: a) write the marshaller yourself (hated), b)
>>>>          annotate your data objects and the marshaller gets
>>>>          generated (loved). Protostream does not currently support
>>>>          service definitions, but this is something I started to
>>>>          investigate recently after Galder asked me if I think it's
>>>>          doable. I think I'll only find out after I do it :)
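>>>>
>>>>          Option b) looks roughly like this (a minimal sketch with
>>>>          the protostream annotations; the class is made up):
>>>>
>>>>              import org.infinispan.protostream.annotations.ProtoField;
>>>>
>>>>              public class Author {
>>>>                 @ProtoField(number = 1)
>>>>                 public String name;
>>>>
>>>>                 // primitive fields need an explicit default value
>>>>                 @ProtoField(number = 2, defaultValue = "0")
>>>>                 public int yearOfBirth;
>>>>              }
>>>>
>>>>          and then ProtoSchemaBuilder generates the .proto schema and
>>>>          the marshaller from those annotations at runtime.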
>>>>
>>>>          Adrian
>>>>
>>>>
>>>>          On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote:
>>>>>          Hi Infinispan developers,
>>>>>
>>>>>          I'm working on a solution for developers who need to access
>>>>>          Infinispan services through different programming languages.
>>>>>
>>>>>          The focus is not on developing a full-featured client, but
>>>>>          rather on discovering the value and the limits of this
>>>>>          approach.
>>>>>
>>>>>          - is it possible to automatically generate useful clients
>>>>>          in different languages?
>>>>>          - can those clients interoperate on the same cache with the
>>>>>          same data types?
>>>>>
>>>>>          I came up with a small prototype that I would like to
>>>>>          submit to you and on which I would like to gather your
>>>>>          impressions.
>>>>>
>>>>>          You can find the project here [1]: it is a gRPC-based
>>>>>          client/server architecture for Infinispan based on an
>>>>>          EmbeddedCache, with very few features exposed atm.
>>>>>
>>>>>          Currently the project is nothing more than a PoC with the
>>>>>          following interesting features:
>>>>>
>>>>>          - clients can be generated in all the gRPC-supported
>>>>>          languages: Java, Go, and C++ examples are provided;
>>>>>          - the interface is fully typed. No need for marshallers,
>>>>>          and clients built in different languages can cooperate on
>>>>>          the same cache;
>>>>>
>>>>>          The second item is my favorite because it frees the
>>>>>          developer from data marshalling.
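>>>>>
>>>>>          For example, a Java client generated from the IDL can be
>>>>>          used in just a few lines (the stub and message names here
>>>>>          are illustrative, not the PoC's actual ones; see [1]):
>>>>>
>>>>>              import io.grpc.ManagedChannel;
>>>>>              import io.grpc.ManagedChannelBuilder;
>>>>>
>>>>>              ManagedChannel ch = ManagedChannelBuilder
>>>>>                    .forAddress("localhost", 50051)
>>>>>                    .usePlaintext()
>>>>>                    .build();
>>>>>              CacheGrpc.CacheBlockingStub cache =
>>>>>                    CacheGrpc.newBlockingStub(ch);
>>>>>              cache.put(KeyValue.newBuilder()
>>>>>                    .setKey(Key.newBuilder().setKey("color"))
>>>>>                    .setValue(Value.newBuilder().setStringValue("red"))
>>>>>                    .build());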
>>>>>
>>>>>          What do you think about it?
>>>>>          Does it sound interesting?
>>>>>          Can you see any flaws?
>>>>>
>>>>>          There's also a list of issues for the future [2]; basically
>>>>>          I would like to investigate these questions:
>>>>>          How far can this architecture go?
>>>>>          Topology, events, queries... how many of the Infinispan
>>>>>          features can fit into a gRPC architecture?
>>>>>
>>>>>          Thank you
>>>>>          Vittorio
>>>>>
>>>>>          [1] https://github.com/rigazilla/ispn-grpc
>>>>>          [2] https://github.com/rigazilla/ispn-grpc/issues
>>>>>
>>>>>          --
>>>>>          Vittorio Rigamonti
>>>>>          Senior Software Engineer
>>>>>          Red Hat <https://www.redhat.com>
>>>>>          Milan, Italy
>>>>>          vrigamon at redhat.com
>>>>>          irc: rigazilla
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>


