> The biggest problem I see in our client/server architecture is the ability to quickly deliver features/APIs across multiple language clients. Both Vittorio and I have seen how long it takes to implement all the different features available in the Java client and port them to Node.js, C/C++/C#, etc. This effort, led by Vittorio, is trying to improve on that by having some of that work done for us. Granted, not all of it will be done, but it should give us some good foundations on which to build.
To expand on it a bit further: the reason it takes us longer to get different features in is that each client implements its own network layer, parses the protocol, and does its own type transformations (between byte[] and whatever the client expects).
IMO, the most costly things there are getting the network layer right (from experience with Node.js, it has taken a while to do so) and the parsing work (not only the parsing itself, but doing it in an efficient way). The network layer also includes load balancing, failover, cluster failover, etc.
From past experience, transforming from byte[] to what the client expects has never really been very problematic for me. What's been difficult here is coming up with the encoding architecture that Gustavo led, whose aim was to improve on the initial compatibility mode. But with that now clear, understood, and proven to solve our issues, the rest in this area should be fairly straightforward IMO.
Type transformation, once done, is a constant. As we add more Hot Rod operations, it's mostly the parsing that starts to become more work. The network can also become more work if, instead of RPC commands, you start supporting stream-based commands.
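To illustrate why I call type transformation a constant, it's essentially a small, fixed set of conversions like the sketch below (the Marshaller interface here is hypothetical, not our actual client API), and that set doesn't grow as we add Hot Rod operations:

    import java.nio.charset.StandardCharsets;

    // Sketch of one type transformation: wire bytes <-> the client's native type.
    interface Marshaller<T> {
       byte[] toBytes(T value);
       T fromBytes(byte[] bytes);
    }

    // Once written, this doesn't change when new operations are added to the protocol.
    final class Utf8StringMarshaller implements Marshaller<String> {
       @Override
       public byte[] toBytes(String value) {
          return value.getBytes(StandardCharsets.UTF_8);
       }

       @Override
       public String fromBytes(byte[] bytes) {
          return new String(bytes, StandardCharsets.UTF_8);
       }
    }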
gRPC solves the network (FYI: with the key as an HTTP header and a SubchannelPicker you can do hash-aware routing) and the parsing for us. I don't see the need for it to solve our type transformations for us. If it does, great, but does it support our compatibility requirements? (I had already told Vittorio to check with Gustavo on this.) Type transformation is a lower priority for me; network and parsing are more important.
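In case it helps, here's a rough sketch of what I mean by hash-aware routing in gRPC Java: the client puts the key into a request header and a custom SubchannelPicker maps it to the owning server. The header name and the hashing below are made up for illustration; a real implementation would plug into the cluster's consistent hash and react to topology updates:

    import io.grpc.LoadBalancer.PickResult;
    import io.grpc.LoadBalancer.PickSubchannelArgs;
    import io.grpc.LoadBalancer.Subchannel;
    import io.grpc.LoadBalancer.SubchannelPicker;
    import io.grpc.Metadata;
    import java.util.List;

    final class HashAwarePicker extends SubchannelPicker {

       // Hypothetical header carrying the cache key for this call.
       static final Metadata.Key<String> KEY_HEADER =
             Metadata.Key.of("infinispan-key", Metadata.ASCII_STRING_MARSHALLER);

       private final List<Subchannel> subchannels; // one ready subchannel per server

       HashAwarePicker(List<Subchannel> subchannels) {
          this.subchannels = subchannels;
       }

       @Override
       public PickResult pickSubchannel(PickSubchannelArgs args) {
          String key = args.getHeaders().get(KEY_HEADER);
          if (key == null) {
             // Calls without a key (e.g. admin ops) can go to any server.
             return PickResult.withSubchannel(subchannels.get(0));
          }
          // Stand-in for the real consistent hash over the cluster topology.
          int owner = Math.floorMod(key.hashCode(), subchannels.size());
          return PickResult.withSubchannel(subchannels.get(owner));
       }
    }

A custom LoadBalancer would install a new picker like this each time the topology changes.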
Hope this clarifies my POV better.
Cheers
On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote:
Thanks Adrian,
of course there's marshalling work under the cover and that
is reflected in the generated code (especially the accessor
methods generated from the oneof clause).
My opinion is that on the client side this could be
accepted, as long as the APIs are well defined and documented:
an application developer can build an ad hoc decorator on top
if needed. The alternative to this is to develop a Protostream
equivalent for each supported language, and that doesn't seem
really feasible to me.
On the server side (Java only) the situation is different:
protobuf is optimized for streaming, not for storing, so
probably a Protostream layer is needed.