Proposal - encrypted cache
by Sebastian Laskawiec
Hey!
A while ago I stumbled upon [1]. The article talks about encrypting data
before it reaches the server, so that the server cannot decrypt it. This
makes the data more secure.
The idea is definitely not new and I have been asked about something
similar several times during local JUG meetups (in my area there are lots
of payment organizations which might be interested in this).
Of course, this can easily be done inside an app: the app encrypts the
data and passes a byte array to the Hot Rod client. I'm just thinking about
making it a bit easier by adding a default encryption/decryption mechanism
to the Hot Rod client.
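For illustration, here is a minimal sketch of what an application has to do
today, assuming AES-GCM via the JDK's javax.crypto and the Java Hot Rod
client; the class name and key handling are made up for the example. A
built-in mechanism would essentially hide something like this inside the
client:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

import org.infinispan.client.hotrod.RemoteCache;

public class EncryptingCacheWriter {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    public EncryptingCacheWriter(SecretKey key) {
        this.key = key;
    }

    // Encrypts the value with AES-GCM and stores IV + ciphertext in the cache,
    // so the server only ever sees ciphertext.
    public void putEncrypted(RemoteCache<String, byte[]> cache, String cacheKey,
                             String value) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        random.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(value.getBytes(StandardCharsets.UTF_8));
        // Prepend the IV so the stored value is self-contained for decryption.
        byte[] stored = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, stored, 0, iv.length);
        System.arraycopy(ciphertext, 0, stored, iv.length, ciphertext.length);
        cache.put(cacheKey, stored);
    }
}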
What do you think? Does it make sense?
Thanks
Sebastian
[1] https://eprint.iacr.org/2016/920.pdf
Hot Rod secured by default
by Tristan Tarrant
Dear all,
after a mini chat on IRC, I wanted to bring this to everybody's attention.
We should make the Hot Rod endpoint require authentication in the
out-of-the-box configuration.
The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL
mechanism against the ApplicationRealm and require users to run the
add-user script.
This would achieve two goals:
- secure out-of-the-box configuration, which is always a good idea
- access to the "protected" schema and script caches, which is otherwise
denied on non-authenticated endpoints unless the client connects over
loopback.
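On the client side this would simply mean configuring credentials, roughly
along these lines (a minimal sketch, assuming the Java Hot Rod client's
authentication builder; exact builder methods may vary between versions):

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class SecuredHotRodClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222)
               .security().authentication()
                   .enable()
                   .saslMechanism("DIGEST-MD5")   // or "PLAIN", ideally only over TLS
                   .username("user")              // created with the add-user script
                   .password("changeme")
                   .realm("ApplicationRealm");
        try (RemoteCacheManager rcm = new RemoteCacheManager(builder.build())) {
            rcm.getCache().put("k", "v");
        }
    }
}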
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
Using load balancers for Infinispan in Kubernetes
by Sebastian Laskawiec
Hey guys!
Over the past few weeks I've been working on accessing an Infinispan cluster
deployed inside Kubernetes from the outside world. The POC diagram looks
like the following:
[image: POC architecture diagram]
As a reminder, the easiest (though not the most effective) way to do it is
to expose a LoadBalancer Service (or a NodePort Service) and access it
using a client with basic intelligence (so that it doesn't try to update
its server list based on topology information). As you might expect, this
won't give you much performance, but at least you can access the cluster.
Another approach is to use TLS/SNI, but again, the performance would be
even worse.
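For reference, the basic-intelligence client mentioned above looks roughly
like this (a sketch, assuming the Java Hot Rod client and its
ClientIntelligence setting; the load balancer hostname is a placeholder):

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class LoadBalancerClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer()
               .host("infinispan-lb.example.com")  // external address of the LoadBalancer Service
               .port(11222);
        // Don't try to use the internal topology; stick to the single address above.
        builder.clientIntelligence(ClientIntelligence.BASIC);
        try (RemoteCacheManager rcm = new RemoteCacheManager(builder.build())) {
            rcm.getCache().put("greeting", "hello from outside the cluster");
        }
    }
}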
During this research I tried to address the problem and created an "External
IP Controller" [1] (and a corresponding Pull Request for mapping
internal/external addresses [2]). The main idea is to have a controller
deployed inside Kubernetes which creates (and destroys when no longer
needed) a load balancer per Infinispan Pod. Additionally, the controller
exposes the mapping between internal and external addresses, which allows
the client to properly update its server list as well as the consistent
hash information. A full working example is located here [3].
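Just to illustrate the idea (this is not the actual controller or client
code, and all names here are made up): the client keeps the mapping
published by the controller and translates every internal Pod address it
sees in the server topology into its external counterpart before
connecting.

import java.net.InetSocketAddress;
import java.util.Map;

public class AddressMapper {

    // Mapping published by the controller: internal Pod address -> external
    // load balancer address.
    private final Map<InetSocketAddress, InetSocketAddress> internalToExternal;

    public AddressMapper(Map<InetSocketAddress, InetSocketAddress> internalToExternal) {
        this.internalToExternal = internalToExternal;
    }

    // Translates an address from the server topology into its external
    // counterpart, falling back to the internal one if no mapping exists.
    public InetSocketAddress translate(InetSocketAddress internal) {
        return internalToExternal.getOrDefault(internal, internal);
    }
}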
The biggest question is whether it's worth it. The short answer is yes.
Here are some benchmark results of performing 10k puts and 10k puts&gets
(please take them with a big grain of salt, I didn't optimize any server
settings):
- Benchmark app deployed inside Kubernetes and using internal addresses
(baseline):
- 10k puts: 674.244 ± 16.654
- 10k puts&gets: 1288.437 ± 136.207
- Benchmarking app deployed in a VM outside of Kubernetes with basic
intelligence:
- *10k puts: 1465.567 ± 176.349*
- *10k puts&gets: 2684.984 ± 114.993*
- Benchmarking app deployed in a VM outside of Kubernetes with address
mapping and topology aware hashing:
- *10k puts: 1052.891 ± 31.218*
- *10k puts&gets: 2465.586 ± 85.034*
Note that benchmarking Infinispan from a VM might be very misleading since
it depends on the data center configuration. The benchmarks above certainly
include some network delay between the Google Compute Engine VM and the
Kubernetes cluster deployed in Google Container Engine. How big is the
delay? Hard to tell. What counts is the difference between a client using
basic intelligence and one using topology-aware intelligence. And as you
can see, it's not that small.
So the bottom line: if you can, deploy your application along with the
Infinispan cluster inside Kubernetes. That's the fastest configuration,
since only iptables are involved. Otherwise, use a load balancer per Pod
with the External IP Controller. If you don't care about performance, just
use basic client intelligence and expose everything through a single load
balancer.
Thanks,
Sebastian
[1] https://github.com/slaskawi/external-ip-proxy
[2] https://github.com/infinispan/infinispan/pull/5164
[3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark
Proposal for moving Hibernate 2l provider to Infinispan
by Galder Zamarreño
Hi all,
Given all the previous discussions we've had on this list [1] [2], it seems like there's a majority of opinions in favour of moving the Infinispan Hibernate 2LC cache provider to the Infinispan repo.
Although we could put it in a completely separate repo, given its importance, I think we should keep it in the main Infinispan repo.
With this in mind, I wanted to propose the following:
1. Move the code out of the Hibernate repository and bring it to the Infinispan master and 9.0.x branches. We'd need to introduce the module in the 9.0.x branch so that 9.0.x users are not left out.
2. Create a root directory called `hibernate-orm` within Infinispan main repo. Within it, we'd keep 1 or more cache provider modules based on major Hibernate versions.
3. What should be the artifact name? Should it be 'hibernate-infinispan' like it is today? The difference with the existing cache provider would be the groupId. Or some other artifact id?
4. Should the main artifact contain the hibernate major version it belongs to? E.g. assuming we take 'hibernate-infinispan', should it be like that, or should it instead be 'hibernate5-infinispan'? This is where it'd be interesting to hear about our past Lucene directory or Query integration experience.
5. Another thing to consider is whether to maintain the same package naming. We're currently using 'org.hibernate.cache.infinispan.*'. From a compatibility standpoint, it'd help to keep the same package, since users reference region factory fully qualified class names (see the snippet after this list). We'd also continue to be sole owners of 'org.hibernate.cache.infinispan.*'. However, I dunno whether having an 'org.hibernate...' package name within the Infinispan repo would create other issues?
6. Testing wise, the cache provider is currently tested one test at a time, using JUnit. The testsuite already runs fast enough and I'd prefer not to change anything in this area right now. Is that OK? Or is there any desire to move it to TestNG?
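Regarding point 5, the reason package naming matters for compatibility is
that users wire the provider via its fully qualified region factory class
name in their Hibernate configuration. A minimal sketch (property and class
names as in current releases):

import java.util.Properties;

public class SecondLevelCacheSettings {

    // Hibernate properties enabling the Infinispan-backed second-level cache.
    public static Properties hibernateProperties() {
        Properties props = new Properties();
        props.setProperty("hibernate.cache.use_second_level_cache", "true");
        props.setProperty("hibernate.cache.region.factory_class",
                "org.hibernate.cache.infinispan.InfinispanRegionFactory");
        return props;
    }
}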
Thoughts? Am I forgetting something?
Cheers,
[1] http://lists.jboss.org/pipermail/infinispan-dev/2017-February/017173.html
[2] http://lists.jboss.org/pipermail/infinispan-dev/2017-May/017546.html
--
Galder Zamarreño
Infinispan, Red Hat
Allocation costs of TypeConverterDelegatingAdvancedCache
by Sanne Grinovero
Hi all,
I've been running some benchmarks and, for the first time, playing with
Infinispan 9+, so please bear with me as I might shoot some dumb
questions to the list in the following days.
The need for TypeConverterDelegatingAdvancedCache to wrap most
operations - especially "convertKeys" - is highlighted as one of the
top allocators in my Search-centric use case.
I'm wondering:
A - Could this implementation be improved?
B - Could I bypass / disable it? Not sure why it's there.
Thanks,
Sanne
HotRod client TCK
by Martin Gencur
Hello all,
we have been working on https://issues.jboss.org/browse/ISPN-7120.
Anna has finished the first step of the JIRA - collecting information
about tests in the Java HotRod client test suite (including server
integration tests) - and it is now ready for wider review.
She created a spreadsheet [1]. The spreadsheet includes, for each Java
test, its name, the suggested target package in the TCK, whether to
include it in the TCK or not, and some other notes. The suggested
package also defines the grouping of the tests (e.g. tck.query, tck.near,
tck.xsite, ...).
Let me add that right now the goal is not to create a true TCK [2]. The
goal is to make sure that all implementations of the HotRod protocol
have sufficient test coverage and, possibly, the same server side for the
client-server tests (including the server version and configuration).
What are the next steps?
* Please review the list (at least a quick look) and see if some of the
tests which are NOT suggested for the TCK should be added or vice versa.
* I suppose the next step would then be to check other implementations
(C#, C++, NodeJS, ..) and identify tests which are missing there (there
will surely be some).
* Gradually implement the missing tests in the other implementations
Note: Here we should ensure that the server is configured in the same
way for all implementations. One way to achieve this (thanks, Anna, for
the suggestion!) is to have shell/batch scripts for the CLI which would be
executed before the tests. This can probably be done for all impls. and
on both UNIX/Windows. I also realize that my PR for ISPN [3] becomes
useless because it uses Creaper (Java) and we need a language-neutral
solution for configuring the server.
Some other notes:
* there are some duplicated tests in the hotrod-client and server
integration test suites; in this case it probably makes sense to include
only the server integration test in the TCK
* tests from the hotrod-client module which are supposed to be part of
the TCK should be copied to the server integration test suite one day
(possibly later)
Please let us know what you think.
Thanks,
Martin
[1]
https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZW...
[2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit
[3] https://github.com/infinispan/infinispan/pull/5012
IRC chat: HB + I9
by Galder Zamarreño
I'm on the move, not sure if Paul/Radim saw my replies:
<pferraro> galderz, rvansa: Hey guys - is there a plan for Hibernate &
ISPN 9?
<rvansa> pferraro: Galder has been working on that
<rvansa> pferraro: though I haven't seen any results but a list of
stuff that needs to be changed
<pferraro> galderz: which Hibernate branch are you targeting?
<rvansa> pferraro: 5.2, but there are minute differences between 5.x
in terms of the parts that need love to get Infinispan 9 support
<pferraro> rvansa: are you suggesting that 5.0 or 5.1 branches will be
adapted to additionally support infinispan 9? how is that
possible?
> pferraro: i'm working on it as we speak...
> pferraro: down to 16 failures
> pferraro: i started a couple of months ago, but had talks/demos to
prepare
> pferraro: i've got back to working on it this week
...
> pferraro: rvansa
> rvansa: minute differences my ass ;p
> pferraro: did you see my replies?
> i got disconnected while replying...
<pferraro> hmm - no - I didn't
<pferraro> galderz: ^
> pferraro: so, working on the HB + I9 integration as we speak
> pferraro: i started a couple of months back but had talks/demos to
prepare and had to put that aside
> pferraro: i'm down to 16 failures
> pferraro: serious refactoring required of the integration to get it
to compile and the tests to pass
> pferraro: need to switch to async interceptor stack in 2lc
integration and get all the subtle changes right
> pferraro: it's a painstaking job basically
> pferraro: i'm working on
https://github.com/galderz/hibernate-orm/tree/t_i9x_v2
> pferraro: i can't remember where i branched off, but it's a branch
that steve had since master was focused on 5.x
> pferraro: i've no idea when/where we'll integrate this, but one
thing is for sure: it's nowhere near backwards compatible
> actually, fixed one this morning, so down to 15 failures
> pferraro: any suggestions/wishes?
> is anyone out there? ;)
Cheers,
--
Galder Zamarreño
Infinispan, Red Hat