Infinispan embedded off-heap cache
by yavuz gokirmak
Hi all,
Is it possible to use Infinispan as an embedded off-heap cache?
As I understand it, this is not implemented yet.
If this is the case, we are planning to put effort into developing an
off-heap embedded cache.
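For context, "off-heap" here means keeping cached values outside the JVM heap - for example in direct ByteBuffers - so they don't add garbage-collection pressure. A minimal illustrative sketch in plain Java (not Infinispan code; the class and method names are made up):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Toy illustration of off-heap caching: values are copied into direct
// ByteBuffers, which live outside the JVM heap, so the cached bytes do
// not contribute to GC pressure. Only the small on-heap index of keys
// to buffers is garbage collected.
public class OffHeapStore {
    private final Map<String, ByteBuffer> index = new HashMap<>();

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocateDirect(bytes.length);
        buf.put(bytes);
        buf.flip(); // prepare the buffer for reading
        index.put(key, buf);
    }

    public String get(String key) {
        ByteBuffer buf = index.get(key);
        if (buf == null) {
            return null;
        }
        byte[] bytes = new byte[buf.remaining()];
        buf.duplicate().get(bytes); // duplicate() keeps the stored position intact
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

A real implementation would of course also need eviction, explicit memory accounting, and serialization of arbitrary value types.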
I would really like to hear your advice,
best regards
10 years, 9 months
Ditching ASYNC modes for REPL/DIST/INV/CacheStores?
by Galder Zamarreño
Hi all,
The following came to my mind yesterday: I think we should ditch the
ASYNC modes for DIST/REPL/INV and our async cache store functionality.
Instead, whoever wants to store something asynchronously should use the
asynchronous methods, i.e. call putAsync(). So, this would mean that
when you call put(), it's always sync. This would reduce the complexity
and configuration of our code base without affecting our functionality,
and it would make things more logical IMO.
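As a toy model (not Infinispan's actual implementation; the class name is hypothetical), the proposal boils down to a single asynchronous code path that the synchronous put() simply waits on:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the proposed API: put() is always sync, and asynchronous
// behaviour is opted into explicitly via putAsync(), so there is no
// separate ASYNC cache mode to configure.
public class SyncByDefaultCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    // Asynchronous variant: returns immediately with a future holding
    // the previous value.
    public CompletableFuture<V> putAsync(K key, V value) {
        return CompletableFuture.supplyAsync(() -> store.put(key, value));
    }

    // Synchronous put: a thin wrapper that waits on the async path, so
    // there is a single code path for both behaviours.
    public V put(K key, V value) {
        return putAsync(key, value).join();
    }

    public V get(K key) {
        return store.get(key);
    }
}
```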
WDYT?
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
10 years, 10 months
Design change in Infinispan Query
by Sanne Grinovero
Hello all,
currently Infinispan Query is an interceptor registered on the
specific Cache instance which has indexing enabled; each such
interceptor does everything it needs to do solely within the scope of
the cache it was registered in.
If you enable indexing on, for example, 3 different caches, there
will be 3 different Hibernate Search engines started in the background,
all unaware of each other.
After some design discussions with Ales for CapeDwarf, and also to
call attention to something that has bothered me for some time, I'd
like to evaluate the option of having a single Hibernate Search engine
registered in the CacheManager and shared across the indexed caches.
Current design limitations:
A- If they are all configured to use the same base directory to
store indexes, and happen to have same-named indexes, they'll share
the index without being aware of each other. This is going to break
unless the user configures some tricky parameters, and even so
performance won't be great: instances will lock each other out, or at
best write in alternate turns.
B- The search engine isn't particularly "heavy"; still, it would be
nice to share some components and internal services.
C- Configuration details which need some care - like injecting a
JGroups channel for clustering - need to be applied while properly
isolating each instance (so large parts of the configuration would be
quite similar but not entirely equal).
D- Incoming messages into a JGroups Receiver need to be routed not
only among indexes, but also among Engine instances. This prevents
Query from reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is
stored in different (indexed) caches, they'll share the same index. Is
it a problem? I'm tempted to consider this a good thing, but wonder if
it would surprise some users. Would you expect that?
2#- Configuration format overhaul: indexing options won't be set in
the cache section but in the global section. I'm looking forward to
using the schema extensions anyway to provide a better configuration
experience than the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be
able to figure out from which cache the value should be loaded.
3#A we could have the cache name encoded in the index, as part
of the identifier: {PK,cacheName}
3#B we actually shard the index, keeping a physically separate
index per cache. This would mean searching on the joint index view but
extracting hits from specific indexes to keep track of "which index".
I think we can do that, but it's definitely tricky.
It's likely easier to keep indexed values from different caches in
different indexes. That would mean rejecting 1# and messing with the
user-defined index name, for example appending the cache name to the
user-defined string.
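A hypothetical sketch of option 3#A - encoding the cache name alongside the primary key in the index identifier, so a search hit can be routed back to the cache that owns the value (names are illustrative, not actual Query code):

```java
import java.util.Objects;

// Hypothetical composite identifier for option 3#A: the index stores
// {PK, cacheName}, so when a hit is found the engine knows which cache
// the value should be loaded from.
public final class IndexedKey {
    private final Object pk;
    private final String cacheName;

    public IndexedKey(Object pk, String cacheName) {
        this.pk = pk;
        this.cacheName = cacheName;
    }

    public Object pk() { return pk; }

    public String cacheName() { return cacheName; }

    // Value semantics: two hits are the same entry only if both the
    // primary key and the owning cache match.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof IndexedKey)) return false;
        IndexedKey other = (IndexedKey) o;
        return Objects.equals(pk, other.pk)
            && Objects.equals(cacheName, other.cacheName);
    }

    @Override
    public int hashCode() { return Objects.hash(pk, cacheName); }
}
```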
Any comment?
Cheers,
Sanne
10 years, 10 months
New Cache Entry Notifications
by William Burns
Hello all,
I have been working with notifications, and most recently I have come
to look into the events generated when a new entry is created.
Normally I would just expect a CacheEntryCreatedEvent to be raised.
However, we currently raise a CacheEntryModifiedEvent and then a
CacheEntryCreatedEvent. I notice that there are comments around the
code saying that tests require both to be fired.
I am wondering if anyone has an objection to raising only a
CacheEntryCreatedEvent when a new cache entry is created. Does
anyone know why we currently raise both? Was it just so that
PutKeyValueCommand could naively raise the CacheEntryModified
pre-event?
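To make the difference concrete, here is a toy model (not the actual Infinispan notification code; all names are illustrative) contrasting the current behaviour with the proposed one:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the two notification behaviours: the current one fires
// CacheEntryModified and then CacheEntryCreated for a brand-new entry,
// while the proposed one fires only CacheEntryCreated.
public class NotifyingCache<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private final List<String> events = new ArrayList<>();
    private final boolean onlyCreatedForNewEntries;

    public NotifyingCache(boolean onlyCreatedForNewEntries) {
        this.onlyCreatedForNewEntries = onlyCreatedForNewEntries;
    }

    public void put(K key, V value) {
        boolean isNew = !store.containsKey(key);
        store.put(key, value);
        if (isNew) {
            if (!onlyCreatedForNewEntries) {
                events.add("CacheEntryModified"); // the current extra event
            }
            events.add("CacheEntryCreated");
        } else {
            events.add("CacheEntryModified"); // updates are unchanged
        }
    }

    public List<String> events() {
        return events;
    }
}
```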
Any input would be appreciated, Thanks.
- Will
10 years, 11 months
L1OnRehash Discussion
by William Burns
Hello everyone,
I wanted to discuss what I would call the dubious benefit of
L1OnRehash, especially weighed against the complexity it brings.
L1OnRehash is used to retain a value by moving a previously owned
value into L1 when a rehash occurs and this node no longer owns
that value. Also, any current L1 values are removed when a rehash
occurs. Therefore it can only save a single remote get, for only a few
keys, when a rehash occurs.
This by itself is fine; however, L1OnRehash needs many edge cases
handled to guarantee consistency, as can be seen from
https://issues.jboss.org/browse/ISPN-3838. This can get quite
complicated for a feature that gives marginal performance increases
(especially given that the value may never have been read recently -
at least normal L1 usage guarantees that it has).
My first suggestion is instead to deprecate the L1OnRehash
configuration option and to remove this logic.
My second suggestion is a new implementation of L1OnRehash that is
always enabled when the L1 threshold is configured to 0. For those not
familiar, the L1 threshold controls whether invalidations are broadcast
instead of sent as individual messages. A value of 0 means always
broadcast. This would allow for some benefits that we can't currently
achieve:
1. L1 values would never have to be invalidated on a rehash event
(guaranteeing local reads under rehash)
2. L1 requestors would no longer have to be tracked
However, every write would be required to send an invalidation, which
could slow write performance in additional cases (since we currently
only send invalidations when requestors are found). The difference
would be lessened with UDP, which is the transport I would assume
someone would use when configuring the L1 threshold to 0.
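The messaging trade-off can be sketched as a toy model (illustrative only; the names are made up, and this is not the actual implementation):

```java
// Toy model of the invalidation trade-off: with a threshold of 0 every
// write sends a single broadcast, while requestor tracking sends one
// unicast per known requestor - and none at all when no requestors
// exist, which is where always-broadcasting costs extra messages.
public final class L1Invalidation {
    private L1Invalidation() {
    }

    // Number of invalidation messages a single write triggers.
    public static int invalidationMessages(int threshold, int requestors) {
        if (threshold == 0 || requestors > threshold) {
            return 1; // a single broadcast/multicast to the whole cluster
        }
        return requestors; // one unicast per tracked requestor
    }
}
```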
What do you guys think? I am thinking that no one minds the removal
of the L1OnRehash we have currently (if so, let me know). I am quite
curious what others think about the changes for an L1 threshold value
of 0 - maybe this configuration value is never used?
Thanks,
- Will
10 years, 11 months
reusing infinispan's marshalling
by Adrian Nistor
Hi list!
I've been pondering re-using the marshalling machinery of
Infinispan in another project, specifically in ProtoStream, where I'm
planning to add it as a test-scoped dependency so I can create a
benchmark to compare marshalling performance. I'm basically interested
in comparing ProtoStream and Infinispan's JBoss Marshalling based
mechanism. Comparing against plain JBMAR, without using the
ExternalizerTable and Externalizers introduced by Infinispan, is not
going to get me accurate results.
But how? I see the marshalling is spread across the infinispan-commons
and infinispan-core modules.
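The kind of benchmark in question could be skeletoned as below, with plain java.io serialization standing in for the marshallers actually being compared (ProtoStream, JBMAR with Infinispan's externalizers); each real candidate would plug in its own marshal/unmarshal pair:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

// Skeleton of a round-trip marshalling benchmark. Plain java.io
// serialization is a placeholder for the candidates under comparison.
public final class MarshallingBench {
    private MarshallingBench() {
    }

    public static byte[] marshal(Serializable obj) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(obj);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return baos.toByteArray();
    }

    public static Object unmarshal(byte[] bytes) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    // Time a number of marshal/unmarshal round trips in nanoseconds.
    public static long roundTripNanos(Serializable obj, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            unmarshal(marshal(obj));
        }
        return System.nanoTime() - start;
    }
}
```

A serious comparison would also need warm-up iterations and varied payloads (ideally via a harness like JMH) to avoid misleading numbers.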
Thanks!
Adrian
10 years, 11 months
Kryo performance (Was: reusing infinispan's marshalling)
by Sanne Grinovero
Changing the subject, as Adrian will need a reply to his (more
important) question.
I don't think we should go shopping for different marshaller
implementations, especially given other priorities.
I've been keeping an eye on Kryo for a while and it looks very good
indeed, but JBMarshaller is serving us pretty well and I'm loving its
reliability.
If we need more speed in this area, I'd rather see us develop some
very accurate benchmarks and try to understand why Kryo is faster
than JBM (if it really is), and potentially improve JBM.
For example, as I've already suggested, it uses an internal identity
map to detect object graphs, and often we might not need that. It
would also be nice to refactor it to write to an existing byte stream
rather than having it allocate internal buffers, and finally we might
want a "stateless edition" to get rid of the need for pooling JBMar
instances.
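The stream-writing refactor can be sketched with plain java.io serialization as a stand-in (illustrative only; JBMAR's real API differs): letting callers pass the destination stream means buffers can be reused instead of allocated on every call:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

// Contrast of the two API styles: a buffer-allocating call that creates
// a fresh byte[] per invocation, versus a stream-writing call where the
// caller owns (and can reuse) the destination buffer.
public final class StreamingMarshal {
    private StreamingMarshal() {
    }

    // Buffer-allocating style: every call allocates a new byte[].
    public static byte[] toNewBuffer(Serializable obj) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        writeTo(obj, baos);
        return baos.toByteArray();
    }

    // Stream-writing style: serialize directly into a caller-provided
    // stream, so the caller can reset and reuse one buffer across calls.
    public static void writeTo(Serializable obj, OutputStream out) {
        try {
            ObjectOutputStream oos = new ObjectOutputStream(out);
            oos.writeObject(obj);
            oos.flush();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```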
-- Sanne
On 31 January 2014 16:29, Vladimir Blagojevic <vblagoje(a)redhat.com> wrote:
> Not 100% related to what you are asking about but have a look at this
> post and the discussion that "erupted":
>
> http://gridgain.blogspot.ca/2012/12/java-serialization-good-fast-and-fast...
>
> Vladimir
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev(a)lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
10 years, 11 months
JPA Store -> Hibernate Store?
by Radim Vansa
Hi,
as I am upgrading the JPA Store to work with the Infinispan 6.0 SPI,
there have been several ideas/recommendations to use the
Hibernate-specific API [1][2]. Currently, the code uses
javax.persistence.* stuff only (although it runs on the Hibernate
implementation).
What do you think, should we:
a) stay with javax.persistence only
b) use the Hibernate API, if it offers better performance / gets rid
of some problems -> should we then rename the store to
infinispan-persistence-hibernate? Or is the Hibernate API an
implementation detail?
c) provide both a performant (Hibernate) and a standard implementation?
My guess is b) (without renaming), as the main idea should be that we
can store JPA objects into a relational DB.
Radim
[1] https://issues.jboss.org/browse/ISPN-3953
[2] https://issues.jboss.org/browse/ISPN-3954
--
Radim Vansa <rvansa(a)redhat.com>
JBoss DataGrid QA
10 years, 11 months