Proposal - encrypted cache
by Sebastian Laskawiec
Hey!
A while ago I stumbled upon [1]. The article talks about encrypting data
before it reaches the server, so that the server doesn't know how to
decrypt it. This makes the data more secure.
The idea is definitely not new, and I have been asked about something
similar several times during local JUG meetups (in my area there are lots
of payment organizations who might be interested in this).
Of course, this can easily be done inside an app, so that the app encrypts
the data and passes a byte array to the Hot Rod client. I'm just thinking
about making it a bit easier by adding a default encryption/decryption
mechanism to the Hot Rod client.
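For completeness, here's a minimal sketch of the app-side approach,
assuming AES-GCM via plain javax.crypto and a RemoteCache<String, byte[]>.
EncryptingCache is a hypothetical wrapper, not an existing Infinispan API:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

import org.infinispan.client.hotrod.RemoteCache;

// Hypothetical wrapper that encrypts values before they reach the Hot Rod
// client, so the server only ever sees ciphertext.
public final class EncryptingCache {
   private static final int IV_LENGTH = 12;        // 96-bit IV, recommended for GCM
   private static final int TAG_LENGTH_BITS = 128;

   private final RemoteCache<String, byte[]> delegate;
   private final SecretKey key;
   private final SecureRandom random = new SecureRandom();

   public EncryptingCache(RemoteCache<String, byte[]> delegate, SecretKey key) {
      this.delegate = delegate;
      this.key = key;
   }

   public void put(String cacheKey, String plaintext) throws Exception {
      byte[] iv = new byte[IV_LENGTH];
      random.nextBytes(iv);
      Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
      cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_LENGTH_BITS, iv));
      byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
      // Prepend the IV so get() can reconstruct the GCM parameters.
      byte[] stored = new byte[IV_LENGTH + ciphertext.length];
      System.arraycopy(iv, 0, stored, 0, IV_LENGTH);
      System.arraycopy(ciphertext, 0, stored, IV_LENGTH, ciphertext.length);
      delegate.put(cacheKey, stored);
   }

   public String get(String cacheKey) throws Exception {
      byte[] stored = delegate.get(cacheKey);
      if (stored == null) return null;
      Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
      cipher.init(Cipher.DECRYPT_MODE, key,
            new GCMParameterSpec(TAG_LENGTH_BITS, stored, 0, IV_LENGTH));
      byte[] plaintext = cipher.doFinal(stored, IV_LENGTH, stored.length - IV_LENGTH);
      return new String(plaintext, StandardCharsets.UTF_8);
   }
}

Key management (and encrypting the keys themselves) is deliberately left
out here; that's exactly the part a default mechanism in the client could
standardize.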
What do you think? Does it make sense?
Thanks
Sebastian
[1] https://eprint.iacr.org/2016/920.pdf
6 years, 5 months
Build broken
by Sanne Grinovero
Radim, Dan, your last commit 58aa3b2185 broke master.
Please fix or revert?
Thanks
6 years, 6 months
Upgrade to JTA 1.2 API?
by Sanne Grinovero
Would it be possible to upgrade
- org.jboss.spec.javax.transaction:jboss-transaction-api_1.1_spec:jar:1.0.1.Final
to
- org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec:jar:1.0.1.Final
?
Thanks!
6 years, 6 months
Re: [infinispan-dev] ISPN-6478
by Radim Vansa
Adding -dev list as we're discussing a new feature.
On 06/25/2018 05:20 PM, Galder Zamarreno wrote:
> The question is more about the potential problem of filling up the
> event queue:
>
> First, which thread should be the one to queue up the event? Netty
> thread or thread pool?
>
> Second, if the queue is full, the thread hangs waiting for it to
> become non-full. Should this be the case? If this is the Netty thread,
> it's probably quite bad. If it's from the remote thread pool, maybe
> not as bad but still bad.
Agreed, blocking is bad, always. I don't believe that we should propagate
backpressure from slow clients (receiving events) to fast clients doing
the updates. The only thing we should do about slow clients is report that
we have some trouble with them (monitoring, logs...).
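To illustrate with a minimal sketch (the names here are made up, not the
actual server classes): the writer would use a non-blocking offer()
instead of a blocking put(), and a full queue would be reported to the
caller rather than stalling the thread:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical per-client event queue: enqueueing never blocks the writer
// (e.g. the Netty thread); a full queue is signalled to the caller, which
// can then log it and drop the client connection.
final class ClientEventQueue {
   private final BlockingQueue<Object> events;

   ClientEventQueue(int capacity) {
      this.events = new ArrayBlockingQueue<>(capacity);
   }

   // Returns false when the queue is full; never waits for space.
   boolean tryEnqueue(Object event) {
      return events.offer(event);
   }
}

The write path then becomes: if tryEnqueue() returns false, log/monitor
and close the channel, never block.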
>
> Regardless, If the event queue is full, we need to do something
> different. Dropping the connection doesn't sound too bad. Our
> guarantees are at least once, so we can get it again on reconnect with
> includeCurrentState.
>
> We could signal such an event with a different event, just like we do
> with ClientCacheFailoverEvent, so that with this new event, you know
> you've been disconnected because events are not being consumed fast
> enough?
Yep, but we can't start sending the event to newer clients even if we
increase the protocol version; we could send such an event only if the
client is explicitly listening for it, and otherwise we should still just
drop the connection.
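Roughly like this (just a sketch; ClientConnection and QueueOverflowEvent
are hypothetical names, not actual Hot Rod types):

// Only a client that explicitly registered for the overflow signal gets
// the event; either way we stop delivering and drop the connection.
interface ClientConnection {
   boolean acceptsOverflowEvents();
   void send(Object event);
   void close();
}

final class QueueOverflowEvent {
}

final class OverflowPolicy {
   void onQueueFull(ClientConnection conn) {
      if (conn.acceptsOverflowEvents()) {
         // The client can react by re-registering its listener with
         // includeCurrentState=true to recover the missed events.
         conn.send(new QueueOverflowEvent());
      }
      conn.close();
   }
}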
Radim
>
> Cheers
>
> On Mon, Jun 25, 2018 at 2:47 PM Radim Vansa <rvansa@redhat.com> wrote:
>
> You mean that the writes are executed by the Netty thread instead of a
> remote-thread-pool thread? OK, I think the issue is still valid.
>
> Since there's no event truncation, when the queue is over the limit all
> we can probably do is drop the connection. The client will reconnect (if
> it's still alive) and ask for the current state (btw. will the user know
> that it has lost some events when it reconnects on an
> includeCurrentState=false listener? I don't think so... we might just
> drop the events anyway then). The current state will be sent in another
> threadpool, and as an improvement we could somehow reject the listener
> registration if we're over the limit on other connections.
>
> Radim
>
> On 06/25/2018 02:29 PM, William Burns wrote:
> > Hrmm, I don't know for sure. Radim changed how the events all
> work now, so it might be fixed.
> >
> > - Will
> >
> > ----- Original Message -----
> >> From: "Galder Zamarreno" <galder@redhat.com>
> >> To: "William Burns" <wburns@redhat.com>
> >> Sent: Tuesday, 19 June, 2018 6:58:57 AM
> >> Subject: ISPN-6478
> >>
> >> Hey,
> >>
> >> Is [1] still an issue for us?
> >>
> >> Cheers,
> >>
> >> [1] https://issues.jboss.org/browse/ISPN-6478
> >>
>
> --
> Radim Vansa <rvansa@redhat.com>
> JBoss Performance Team
>
--
Radim Vansa <rvansa@redhat.com>
JBoss Performance Team
6 years, 6 months
Infinispan client/server architecture based on gRPC
by Vittorio Rigamonti
Hi Infinispan developers,
I'm working on a solution for developers who need to access Infinispan
services from different programming languages.
The focus is not on developing a full-featured client, but rather on
discovering the value and the limits of this approach:
- is it possible to automatically generate useful clients in different
languages?
- can those clients interoperate on the same cache with the same data types?
I came up with a small prototype that I would like to submit to you and on
which I would like to gather your impressions.
You can find the project here [1]: it's a gRPC-based client/server
architecture for Infinispan, based on an EmbeddedCache, with very few
features exposed atm.
Currently the project is nothing more than a PoC with the following
interesting features:
- clients can be generated in all the gRPC-supported languages: Java, Go,
and C++ examples are provided;
- the interface is fully typed: there's no need for a marshaller, and
clients built in different languages can cooperate on the same cache (see
the sketch below).
The second item is my favourite, because it frees the developer from data
marshalling.
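To give an idea of what "fully typed" means here, a hypothetical sketch of
such a service definition (the real .proto files live in the repo [1] and
may differ):

syntax = "proto3";

package ispn.example;

// The schema, not a marshaller, defines the wire format, so generated
// clients in any language share the same key/value types.
message Key { string name = 1; }
message Value { string payload = 1; }

message PutRequest {
  Key key = 1;
  Value value = 2;
}

message PutResponse {
  Value previous = 1;  // previous value, if any
}

service TypedCache {
  rpc Put (PutRequest) returns (PutResponse);
  rpc Get (Key) returns (Value);
}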
What do you think about it?
Does it sound interesting?
Can you see any flaws?
There's also a list of issues for the future [2]; basically, I would like
to investigate these questions:
How far can this architecture go?
Topology, events, queries... how many of the Infinispan features can fit
into a gRPC architecture?
Thank you
Vittorio
[1] https://github.com/rigazilla/ispn-grpc
[2] https://github.com/rigazilla/ispn-grpc/issues
--
Vittorio Rigamonti
Senior Software Engineer
Red Hat
<https://www.redhat.com>
Milan, Italy
vrigamon@redhat.com
irc: rigazilla
<https://red.ht/sig>
6 years, 6 months
weekly zulip meeting
by Vittorio Rigamonti
Hi All,
I'll miss the chat meeting today, sorry.
Here's my status:
last week I sent an email to the infinispan-dev ML to gather opinions
about my project for an ispn client based on gRPC.
From the feedback, it seems that the community is more interested in how
gRPC can save us work on the networking layer (topology, retry, security)
than in my work on user data model representation.
So I will probably change my focus and do some more work in that direction.
Apart from that, I'm about to complete my work on counters in C#
(HRCPP-457): finishing the listener implementation and the test suite.
I've also released the 9.3.0.CR1 Infinispan version.
This week I'll work on parameterized queries (HRCPP-339), and I would also
like to have the work on HRCPP-454 and HRCPP-455 merged so I can build a
release for the C++ clients.
Cheers,
Vittorio
--
Vittorio Rigamonti
Senior Software Engineer
Red Hat
<https://www.redhat.com>
Milan, Italy
vrigamon@redhat.com
irc: rigazilla
<https://red.ht/sig>
6 years, 6 months