Proposal - encrypted cache
by Sebastian Laskawiec
Hey!
A while ago I stumbled upon [1]. The article talks about encrypting data
before it reaches the server, so that the server doesn't know how to
decrypt it. This makes the data more secure.
The idea is definitely not new, and I have been asked about something
similar several times during local JUG meetups (in my area there are lots
of payment organizations that might be interested in this).
Of course, this can easily be done inside the app itself: it encrypts the
data and passes a byte array to the Hot Rod client. I'm just thinking about
making it a bit easier by adding a default encryption/decryption mechanism
to the Hot Rod client.
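To make the idea concrete, here is a minimal sketch of what an app does today before handing bytes to the Hot Rod client, using only the JDK's javax.crypto with AES-GCM. The class and method names (ValueCrypto, encrypt, decrypt) are hypothetical, and key management is deliberately left out — that's the hard part a built-in mechanism would have to address.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Hypothetical sketch: encrypt a value client-side so the server only ever
// sees ciphertext; the resulting byte[] is what would go into the cache.
public class ValueCrypto {
    private static final int IV_LEN = 12;    // 96-bit nonce, recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = cipher.doFinal(plaintext);
        // Prepend the IV so the client can decrypt later without extra state
        byte[] out = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(blob, 0, IV_LEN)));
        return cipher.doFinal(Arrays.copyOfRange(blob, IV_LEN, blob.length));
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] blob = encrypt(key, "sensitive payload".getBytes());
        // blob is what the app would pass to the Hot Rod client today
        System.out.println(new String(decrypt(key, blob))); // prints "sensitive payload"
    }
}
```

A built-in mechanism would essentially move these two calls behind the client's marshalling layer, configured once instead of repeated in every app.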
What do you think? Does it make sense?
Thanks
Sebastian
[1] https://eprint.iacr.org/2016/920.pdf
6 years, 4 months
Calling getCache with a template and defined configuration
by William Burns
While working on another project that uses Infinispan, I came across some
code that was a bit interesting, and I don't think our template
configuration handling was expecting to be used in such a way.
Essentially the code defined a template for a distributed cache as well as
some named caches. Then whenever a cache was retrieved it would pass the
given name and always the distributed cache template. Unfortunately, with
the way templates work, they essentially redefine the cache first, so the
actual cache configuration was wiped out. In this example I was able to
get the code changed to use a default cache instead, which is the behavior
that is needed.
The issue at hand, though, is whether we should allow a user to call
getCache in such a way. My initial thought is to throw some sort of
configuration exception when this is invoked, but there are a few possible
options:
1. Throw a configuration exception not allowing a user to use a template
with an already defined cache. This has a slight disconnect between
configuration and runtime, since if a user adds a new definition it could
cause runtime issues.
2. Log an error/warning message when this occurs. Is this enough, though?
We could still have runtime issues that go undetected.
3. Merge the configurations together, applying the template first. This
would be akin to how the default cache works currently, but you would get to
define your default template configuration at runtime. This sounded like
the best option to me, but the problem is: what if someone calls getCache
using the same cache name but a different template? This could get hairy as
well.
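The merge semantics of option 3 can be illustrated with a toy model (this is not Infinispan API — configuration attributes are modelled as a plain Map): the template is applied first, so the named cache's own definition overrides it instead of being wiped out.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of option 3: apply the template first, then let the
// cache's own definition win for any attribute both of them set.
public class TemplateMerge {
    public static Map<String, String> merge(Map<String, String> template,
                                            Map<String, String> named) {
        Map<String, String> merged = new LinkedHashMap<>(template); // template first
        merged.putAll(named);                                       // definition wins
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> template = new LinkedHashMap<>();
        template.put("mode", "DIST_SYNC");
        template.put("owners", "2");
        Map<String, String> named = new LinkedHashMap<>();
        named.put("owners", "3"); // the cache's own setting must survive the merge
        System.out.println(merge(template, named)); // {mode=DIST_SYNC, owners=3}
    }
}
```

The "different template on a later call" problem shows up here too: merging the same named definition over two different templates yields two different results, so some call ordering or caching of the first merge would have to be defined.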
Thinking about the future, disconnecting cache definition from retrieval
would really be the best option, but we can't do that this late in the
game.
What do you guys think?
- Will
7 years, 7 months
Concerns about the testsuite state
by Sanne Grinovero
Hi all,
I was mentioning on IRC today that I've seen many failures in the past
few hours while trying to verify a simple PR.
Tristan suggested sharing some of the failures, so here are the outcomes of
my first attempts to build Infinispan, each time restarting after a
module failed:
Failed tests:
ClusterListenerDistTest>AbstractClusterListenerNonTxTest.testPrimaryOwnerGoesDownAfterSendingEvent:81
expected [ClusterListenerDistTest-NodeC-54048] but found
[ClusterListenerDistTest-NodeA-38805]
ClusterListenerDistTest.testPrimaryOwnerGoesDownBeforeSendingEvent:78
expected [ClusterListenerDistTest-NodeDR-1944] but found
[ClusterListenerDistTest-NodeDP-23754]
Failed tests:
LocalModeNoPassivationTest>LocalModePassivationTest.testValuesWithEvictedEntries:219
Value: 295 was not found!
InfinispanNodeFailureTest.killedNodeDoesNotBreakReplaceCommand:135
expected:<false> but was:<true>
Failed tests:
DistL1WriteSkewTest>AbstractClusteredWriteSkewTest.testConditionalPutFailWriteSkewWithPassivation:184->AbstractClusteredWriteSkewTest.doTestWriteSkewWithPassivation:316
The key was not evicted after 10 inserts
RecoveryEnabledWriteSkewTest>AbstractClusteredWriteSkewTest.testRemoveFailWriteSkewWithPassivationOnNonOwner:160->AbstractClusteredWriteSkewTest.doTestWriteSkewWithPassivation:316
The key was not evicted after 10 inserts
Failed tests:
NonTxBackupOwnerBecomingPrimaryOwnerTest.testPrimaryOwnerChangingDuringPutOverwrite:75->doTest:165
» Runtime
ClusteredTxConditionalCommandTest>ClusteredConditionalCommandTest.testPutIfAbsentOnNonOwnerShared:246->ClusteredConditionalCommandTest.doTest:121->assertLoadAfterOperation:46
primary owner load expected:<1> but was:<0>
ReplCommandForwardingTest.testForwardToJoinerAsyncPrepare:119->testForwardToJoinerAsyncTx:161
» IllegalState
Failed tests:
org.infinispan.functional.FunctionalCachestoreTest.testWriteLoad[passivation=false](org.infinispan.functional.FunctionalCachestoreTest)
Run 1: PASS
Run 2: PASS
Run 3: PASS
Run 4: PASS
Run 5: PASS
Run 6: PASS
Run 7: PASS
Run 8: PASS
Run 9: FunctionalCachestoreTest.testWriteLoad:58->lambda$testWriteLoad$3:58
FunctionalCachestoreTest[passivation=false]-NodeB-46507 expected
[false] but found [true]
Run 10: PASS
Run 11: PASS
Run 12: PASS
Run 13: PASS
Run 14: PASS
Run 15: PASS
Run 16: PASS
org.infinispan.functional.FunctionalCachestoreTest.testWriteLoad[passivation=true](org.infinispan.functional.FunctionalCachestoreTest)
Run 1: PASS
Run 2: PASS
Run 3: PASS
Run 4: PASS
Run 5: PASS
Run 6: PASS
Run 7: PASS
Run 8: PASS
Run 9: PASS
Run 10: PASS
Run 11: PASS
Run 12: PASS
Run 13: PASS
Run 14: FunctionalCachestoreTest.testWriteLoad:58->lambda$testWriteLoad$3:58
FunctionalCachestoreTest[passivation=true]-NodeB-39167 expected
[false] but found [true]
Run 15: PASS
Run 16: PASS
DistTotalOrderL1WriteSkewTest>AbstractClusteredWriteSkewTest.testConditionalPutFailWriteSkewWithPassivation:184->AbstractClusteredWriteSkewTest.doTestWriteSkewWithPassivation:316
The key was not evicted after 10 inserts
DistTotalOrderWriteSkewTest>AbstractClusteredWriteSkewTest.testConditionalRemoveFailWriteSkewWithPassivation:200->AbstractClusteredWriteSkewTest.doTestWriteSkewWithPassivation:316
The key was not evicted after 10 inserts
Tests run: 8427, Failures: 4, Errors: 0, Skipped: 0
Failed tests:
SecureServerFailureRetryTest>HitsAwareCacheManagersTest.createBeforeMethod:114->MultipleCacheManagersTest.createBeforeMethod:119->MultipleCacheManagersTest.callCreateCacheManagers:109->AbstractRetryTest.createCacheManagers:63->createStartHotRodServer:27
» IllegalState
Failed tests:
DistWriteSkewTest>AbstractClusteredWriteSkewTest.testConditionalPutFailWriteSkewWithPassivationOnNonOwner:192->AbstractClusteredWriteSkewTest.doTestWriteSkewWithPassivation:316
The key was not evicted after 10 inserts
DistWriteSkewTest>AbstractClusteredWriteSkewTest.testConditionalReplaceWriteSkewWithPassivationOnNonOwner:220->AbstractClusteredWriteSkewTest.doTestWriteSkewWithPassivation:316
The key was not evicted after 10 inserts
Tests run: 8457, Failures: 2, Errors: 0, Skipped: 0
AFAIR, in one case the build was able to go beyond infinispan-core; in all
the others these are failures in core.
Thanks,
Sanne
7 years, 8 months
Default TCP configuration is broken.
by Pedro Ruivo
Hi team,
The 'default-jgroups-tcp.xml' file has the MFC protocol without the
FRAG2/FRAG3 protocol. This is broken when we send a multicast message
larger than 'max_credits': it will block forever in MFC [1]. There are
no timeouts since we don't have the CompletableFuture at this point.
Possible solutions are:
#1 put FRAG2/FRAG3 back
advantage: we keep multicast flow control.
disadvantage: all messages are fragmented (unicast and multicast), which
probably requires more resources (more messages in the NAKACK and UNICAST
tables?)
#2 remove MFC
advantage: probably lower resource usage; TCP will handle any fragmentation.
disadvantage: we lose multicast flow control.
#3 alternative?
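For option #1, the relevant fragment of the stack would look roughly like the following. The attribute values are illustrative, not the actual defaults; the point is that FRAG2 sits in the stack with a frag_size well below MFC's max_credits, so no single multicast message can exceed the credit window.

```xml
<!-- Hypothetical sketch of option #1 (values are illustrative):
     fragments are capped at 60K, so a message can never ask MFC
     for more than the 2M credit window in one piece. -->
<MFC max_credits="2M" min_threshold="0.40"/>
<FRAG2 frag_size="60K"/>
```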
Cheers,
Pedro
[1] actually, I need a thread dump to confirm it.
7 years, 8 months
Classloader leaks?
by Sanne Grinovero
Hi all,
our documentation suggests raising the file limits to about 16K:
http://infinispan.org/docs/stable/contributing/contributing.html#running_...
I have had this setup for years, yet I've been noticing errors such as:
"Caused by: java.io.IOException: Too many open files"
Today I decided to finally have a look, and I see that while running
the testsuite, my system's consumption of file descriptors rises
continuously, up to more than 2 million.
(When not running the suite, I'm consuming 200K - and that includes
IDEs and other FD-hungry programs like Chrome.)
Sampling some of these file descriptors, it looks like they really are
open files - jar files, to be more precise.
What puzzles me is that taking just one jar - jgroups, for example - I
can count 7852 open instances of it, distributed among only a handful
of processes.
My guess is that classloaders aren't being closed?
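If that guess is right, the pattern would look like the sketch below: a URLClassLoader keeps a descriptor on its jar open until close() is called, so loaders created per-test and never closed would accumulate FDs exactly as observed. The class name is mine; the temp file stands in for a real jar.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of the suspected leak: since Java 7, URLClassLoader is
// Closeable, and close() releases the jar's file descriptor deterministically.
// Omitting close() leaves the descriptor open until the loader is GC'd,
// which for long-lived or leaked loaders may be never.
public class LoaderLeakDemo {
    static boolean openAndClose(Path jar) throws Exception {
        // try-with-resources guarantees the descriptor is released on exit
        try (URLClassLoader loader =
                 new URLClassLoader(new URL[] { jar.toUri().toURL() })) {
            return loader.getURLs().length == 1; // loader is usable in here
        }
    }

    public static void main(String[] args) throws Exception {
        Path jar = Files.createTempFile("demo", ".jar"); // stand-in for jgroups.jar
        System.out.println(openAndClose(jar)); // prints "true"
        Files.delete(jar);
    }
}
```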
Also: why has nobody else noticed problems? Have you all reconfigured
your systems for unlimited FDs?
Thanks,
Sanne
7 years, 8 months
Major version cleaning
by Tristan Tarrant
Hi guys, we discussed this a little bit in the past and again this
morning on IRC. Here are some proposed removals:
- Remove the async transactional modes, as they are quite pointless
- Remove batching: users should use transactions
- Remove the tree module: it doesn't work properly, and uses batching
Please cast your votes
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
7 years, 8 months