Proposal - encrypted cache
by Sebastian Laskawiec
Hey!
A while ago I stumbled upon [1]. The article talks about encrypting data
before it reaches the server, so that the server cannot decrypt it. This
makes the data more secure, since even a compromised server can't read it.
The idea is definitely not new, and I have been asked about something
similar several times at local JUG meetups (in my area there are lots
of payment organizations which might be interested in this).
Of course, this can easily be done inside an app: the app encrypts the
data and passes a byte array to the Hot Rod client. I'm just thinking about
making it a bit easier by adding a default encryption/decryption mechanism
to the Hot Rod client.
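For illustration, here is a minimal sketch of the app-side approach as it
works today, using only the JDK's javax.crypto API. The CryptoCodec class
and its key handling are hypothetical, purely for the example; nothing like
it exists in the client yet:

    import java.nio.ByteBuffer;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    // Hypothetical app-side helper: values are encrypted before they are
    // handed to the Hot Rod client, so the server only ever sees ciphertext.
    public final class CryptoCodec {
        private static final int GCM_TAG_BITS = 128;
        private static final int IV_BYTES = 12;

        private final SecretKey key;
        private final SecureRandom random = new SecureRandom();

        public CryptoCodec(SecretKey key) { this.key = key; }

        public byte[] encrypt(byte[] plaintext) throws Exception {
            byte[] iv = new byte[IV_BYTES];
            random.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] ciphertext = cipher.doFinal(plaintext);
            // Prepend the IV so decrypt() can recover it.
            return ByteBuffer.allocate(IV_BYTES + ciphertext.length)
                             .put(iv).put(ciphertext).array();
        }

        public byte[] decrypt(byte[] blob) throws Exception {
            ByteBuffer buf = ByteBuffer.wrap(blob);
            byte[] iv = new byte[IV_BYTES];
            buf.get(iv);
            byte[] ciphertext = new byte[buf.remaining()];
            buf.get(ciphertext);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            return cipher.doFinal(ciphertext);
        }
    }

The app would then call something like cache.put(key, codec.encrypt(serializedValue));
the proposal is essentially to fold this step into the client so it happens
transparently.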
What do you think? Does it make sense?
Thanks
Sebastian
[1] https://eprint.iacr.org/2016/920.pdf
6 years, 4 months
Hot Rod secured by default
by Tristan Tarrant
Dear all,
after a mini chat on IRC, I wanted to bring this to everybody's attention.
We should make the Hot Rod endpoint require authentication in the
out-of-the-box configuration.
The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL
mechanism against the ApplicationRealm and require users to run the
add-user script.
This would achieve two goals:
- secure out-of-the-box configuration, which is always a good idea
- access to the "protected" schema and script caches, which is otherwise
prevented on non-authenticated endpoints unless the client connects over
loopback.
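For reference, a rough sketch of what a Hot Rod client would then need; the
host, credentials and the DIGEST-MD5 mechanism are illustrative placeholders
for values set up via the add-user script, and the exact builder calls should
be double-checked against the client API:

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class SecuredHotRodClient {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222)
                   .security().authentication()
                       .enable()
                       .saslMechanism("DIGEST-MD5")
                       .realm("ApplicationRealm")        // the realm add-user writes to
                       .username("user")                 // created by the add-user script
                       .password("secret".toCharArray());
            RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
            rcm.getCache().put("key", "value");
            rcm.stop();
        }
    }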
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
6 years, 8 months
Simplest way to check the validity of connection to Remote Cache
by Ramesh Reddy
Hi,
Is there a call I can make on the cache API, like a ping, to check the validity of the remote connection? On OpenShift, JDV is having issues keeping its connections to JDG fresh when the node count goes down to zero and comes back up.
Thank you.
Ramesh..
7 years, 6 months
Executing server tasks that contain POJOs
by Galder Zamarreño
Hi all,
For a demo I'm giving next week, I'd like to show how to use distributed streams via a remote server task. All the server tasks we have in the testsuite rely on primitives, but in my case I wanted to use POJOs.
To do that, I needed to get compatibility mode working in such a way that those POJOs could be unmarshalled for the server task. Since in another demo I'm showing Protostream-based POJOs, I thought I'd try to use that as the mechanism to unmarshall POJOs server-side.
We have a test for such a scenario [1], but the reality (running on a proper server) is anything but simple. Here's a list of things I found out while creating a WordCount example that relies on a POJO:
1. Out of the box, it's impossible to set the compatibility marshaller to org.infinispan.query.remote.CompatibilityProtoStreamMarshaller [2] because the "org.infinispan.main" classloader can't access that class. I worked around that by tweaking module.xml to add an optional dependency on the "org.infinispan.remote-query.server" module.
2. After doing that, I had to register the protofile and associated classes remotely in the server. Again, there's no out-of-the-box mechanism for that, so I created a remote server task to do it [3] (a sketch follows below).
3. Finally, with all that in place, I was able to complete the WordCount test [4], with one final caveat: the word count and protofile registration tasks return objects that are not marshalled by the compatibility marshaller, so I had to make sure that the remote cache manager used to invoke those tasks uses the default marshaller.
Clearly we need to improve on this, and we have plans to address these issues (with the upcoming transcoding capabilities), but I thought it'd be worth mentioning the problems I found in case anyone else runs into them before transcoding is in place.
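As an illustration of point 2, here is a hedged sketch of what such a
registration task can look like, assuming the ServerTask/TaskContext API of
Infinispan 8.x/9.x; the task name, the parameter keys and writing straight
into the "___protobuf_metadata" cache are assumptions for the example, not
necessarily what [3] does:

    import java.util.Map;
    import org.infinispan.Cache;
    import org.infinispan.tasks.ServerTask;
    import org.infinispan.tasks.TaskContext;

    // Hypothetical task: the client sends the .proto source as parameters
    // and the task stores it in the server's protobuf metadata cache.
    public class RegisterProtoTask implements ServerTask<Boolean> {
        private TaskContext ctx;

        @Override
        public void setTaskContext(TaskContext ctx) { this.ctx = ctx; }

        @Override
        public String getName() { return "register-proto"; }

        @Override
        public Boolean call() throws Exception {
            Map<String, ?> params = ctx.getParameters().get();
            Cache<String, String> metadataCache =
                  ctx.getCache().get().getCacheManager().getCache("___protobuf_metadata");
            metadataCache.put((String) params.get("fileName"),
                              (String) params.get("fileContents"));
            return Boolean.TRUE;
        }
    }

A Hot Rod client would then invoke it with something like
cache.execute("register-proto", params).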
Cheers,
[1] https://github.com/galderz/datagrid-patterns/blob/master/server-config/do...
[2] https://github.com/galderz/datagrid-patterns/blob/master/server-config/or...
[3] https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream...
[4] https://github.com/galderz/datagrid-patterns/blob/master/analytics-stream...
--
Galder Zamarreño
Infinispan, Red Hat
7 years, 7 months
Infinispan Designs repository
by Tristan Tarrant
As Sebastian pointed out, GitHub's wiki doesn't really take advantage
of the excellent review tools that are available for regular repos.
Instead of creating noise in the main infinispan repo, I have set up a
dedicated repo for design ideas. Let's fill it up!
https://github.com/infinispan/infinispan-designs
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
7 years, 7 months
Strategy for adopting Optional in APIs
by Radim Vansa
Hi,
I was wondering what the common attitude towards using Optional in
APIs is, and what naming pattern we should use. As an example, I dislike
writing
if (entry.getMetadata() != null && entry.getMetadata().version() != null) {
   foo.use(entry.getMetadata().version());
}
where I could just do
entry.metadata().flatMap(Metadata::optionalVersion).ifPresent(foo::use);
Here I have proposed a metadata() method returning Optional<Metadata>
(the regular getter is called getMetadata()) and the admittedly annoying
optionalVersion(), since version() is already the regular getter.
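To make the proposed convention concrete, here is a minimal sketch;
Metadata, version() and getMetadata() are as in the example above, while
EntryVersion and the exact shapes are just illustrative, not the real
Infinispan interfaces:

    import java.util.Optional;

    interface EntryVersion {}

    interface Metadata {
        EntryVersion version();                // classic getter, may return null

        default Optional<EntryVersion> optionalVersion() {
            return Optional.ofNullable(version());
        }
    }

    interface CacheEntry {
        Metadata getMetadata();                // classic getter, may return null

        default Optional<Metadata> metadata() {
            return Optional.ofNullable(getMetadata());
        }
    }

With these defaults in place, the chained call above compiles as written.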
Shall we adopt a common stance (use / don't use / use at the developer's
discretion) and naming conventions? Is it acceptable to start adding
default Optional<Foo> foo() { return Optional.ofNullable(getFoo()); }
whenever we feel the urge to chain Optionals?
Radim
--
Radim Vansa <rvansa(a)redhat.com>
JBoss Performance Team
7 years, 7 months
Infinispan 9.0 Final
by Tristan Tarrant
Dear all,
we are proud to announce Infinispan 9.0 Final.
This release includes many new features and improvements:
- much improved performance in all scenarios
- off-heap data container, to avoid GC pauses
- Ickle, a new query language based on JP-QL with full-text capabilities
- multi-tenancy with SNI support for the server
- vastly improved cloud and container integrations
Read more about it in our announcement [1]
As usual you can find all the downloads, documentation and community
links on our website: http://infinispan.org
Enjoy!
The Infinispan Team
[1] http://blog.infinispan.org/2017/03/infinispan-9.html
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
7 years, 7 months
Branching proposal
by Sebastian Laskawiec
Hey!
We are about to start working on the 9.1.x and 9.2.x branches, so I would
like to propose an alternative merging strategy.
Our current workflow looks like this:
X - new commit
X` - cherry pick to maintenance branch
--+-------------------+-------X----- master
  |                    \------X`---- 9.2.x
  \---------------------------X``--- 9.1.x
Each commit needs to be reviewed on the master branch and then backported to
the maintenance branches. From a maintenance perspective this is a bit
painful, since in the above example we need to go through the PR queue three
times. It's also worth mentioning that X is neither X` nor X``:
cherry-picking creates a copy of a commit, which makes some useful tricks
(like `git tag --contains <sha1>`) harder to use. Finally, this approach
lets the codebase diverge from the maintenance branches very fast (someone
might simply forget to backport some refactoring).
The proposal:
X, Y - new commits
/ - merge commits
--+---------+------/----/--- master
  |          \----/---Y/---- 9.2.x
  \-------------X/---------- 9.1.x
With this proposal, a developer would always implement a given feature on
the lowest applicable maintenance branch. We would then run a set of merges
from 9.1.x into 9.2.x and finally into master. The biggest advantage of
this approach is that a given piece of functionality (identified by a
commit) has the same SHA1 on all branches, which makes tools like the
aforementioned `git tag --contains <sha1>` work. There are some further
implications of this approach:
- Merge commits should be performed very often (even automatically
overnight, when they merge without conflicts).
- After each maintenance release, someone will need to do a merge with the
`ours` strategy (`git merge -s ours upstream/9.2.x`). This way we won't
have to resolve version conflicts in the poms.
- Since there is no nice way to rebase a merge commit, merges should be
pushed directly to the master branch (without review, without CI). After
the merge, HEAD will change and CI will automatically pick up the build.
Remember, merges should be done very often, so I assume there won't be
problems most of the time.
- Finally, with this approach the code diverges slightly more slowly (at
least in my experience), mainly because we don't need to remember to
cherry-pick individual commits; they are automatically carried over by a
merge. A sketch of the flow follows this list.
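Here is a hedged sketch of the day-to-day flow under this proposal; the
branch and remote names are illustrative:

    # New work is merged into the lowest applicable branch, then merged forward.
    git checkout 9.2.x
    git merge 9.1.x        # same commits, same SHA1s, now also on 9.2.x
    git checkout master
    git merge 9.2.x        # and forward again into master

    # After releasing e.g. 9.2.1, record the version-bump commits as merged
    # without taking their content, so the poms don't conflict:
    git checkout master
    git merge -s ours upstream/9.2.x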
From my past experience, this strategy works pretty well and can be almost
fully automated. It significantly lowers the maintenance pain around
cherry-picks. However, there is nothing for free: we would need to get
used to pushing merges directly into master (which is fine with me, but
some of you might not like it).
Thanks,
Sebastian
7 years, 7 months
Stream operations under lock
by William Burns
Some users have expressed the need for some sort of forEach operation
where the Consumer is called while holding the lock for the given key, and
the lock is released after the Consumer completes.
Due to the way streams work, with retries and with the operation performed
on the primary owner, forEach lends itself to doing this efficiently.
The problem is that this only really works well with non-tx and
pessimistic-tx caches. It obviously leaves out optimistic tx, which at
first I was a little worried about. But after thinking about it more, this
kind of prelocking and optimistic tx don't really fit together anyway. So I
am thinking that this operation would throw an exception, not letting the
user use the feature in optimistic transactions.
Another question is what the API for this should look like. I was debating
between three options myself:

1. AdvancedCache.forEachWithLock(BiConsumer<Cache, CacheEntry<K, V>> consumer)

This requires the fewest changes; however, the user can't customize certain
parameters that CacheStream currently provides (listed below, the big one
being filterKeys).

2. CacheStream.forEachWithLock(BiConsumer<Cache, CacheEntry<K, V>> consumer)

This method would only be allowed to be invoked on the stream if no other
intermediate operations had been invoked, otherwise an exception would be
thrown. This still gives us access to all of the CacheStream methods that
aren't on the Stream interface (i.e. sequentialDistribution,
parallelDistribution, parallel, sequential, filterKeys, filterKeySegments,
distributedBatchSize, disableRehashAware, timeout).

3. LockedStream<CacheEntry<K, V>> AdvancedCache.lockedStream()

This requires the most changes; however, the API would be the most
explicit. In this case the LockedStream would only have forEach plus the
methods that can sensibly be invoked on it, as noted above (see the sketch
below).
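To make #3 concrete, here is a hedged sketch of what LockedStream could
expose; all signatures are illustrative (typed as LockedStream<K, V> here
for readability), not a settled API:

    import java.util.Set;
    import java.util.concurrent.TimeUnit;
    import java.util.function.BiConsumer;
    import org.infinispan.Cache;
    import org.infinispan.container.entries.CacheEntry;

    public interface LockedStream<K, V> {
        // The subset of CacheStream configuration that still makes sense here.
        LockedStream<K, V> filterKeys(Set<? extends K> keys);
        LockedStream<K, V> filterKeySegments(Set<Integer> segments);
        LockedStream<K, V> distributedBatchSize(int batchSize);
        LockedStream<K, V> sequentialDistribution();
        LockedStream<K, V> parallelDistribution();
        LockedStream<K, V> disableRehashAware();
        LockedStream<K, V> timeout(long timeout, TimeUnit unit);

        // The consumer runs on the primary owner while the key's lock is
        // held; the lock is released once the consumer returns.
        void forEach(BiConsumer<Cache<K, V>, CacheEntry<K, V>> consumer);
    }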
I personally feel that #3 might be the cleanest, but it obviously requires
adding more classes. Let me know what you guys think, and whether the
optimistic exclusion is acceptable.
Thanks,
- Will
7 years, 7 months