Infinispan embedded off-heap cache
by yavuz gokirmak
Hi all,
Is it possible to use Infinispan as an embedded off-heap cache?
As I understand it, this is not implemented yet.
If that is the case, we are planning to put effort into developing an off-heap
embedded cache.
I would really like to hear your advice,
best regards
10 years, 9 months
Design change in Infinispan Query
by Sanne Grinovero
Hello all,
currently Infinispan Query is implemented as an interceptor registered on each
Cache instance that has indexing enabled; each such interceptor does all it
needs to do in the sole scope of the cache it was registered in.
If you enable indexing on, for example, 3 different caches, there will be 3
different Hibernate Search engines started in the background, all unaware of
each other.
After some design discussions with Ales for CapeDwarf, and also to call
attention to something that has bothered me for some time, I'd like to
evaluate the option of having a single Hibernate Search engine registered in
the CacheManager and shared across the indexed caches.
Current design limitations:
A- If they are all configured to use the same base directory to store
indexes, and happen to have same-named indexes, they'll share the index
without being aware of each other. This is going to break unless the user
configures some tricky parameters, and even then performance won't be great:
instances will lock each other out, or at best write in alternate turns (see
the configuration sketch after this list).
B- The search engine isn't particularly "heavy"; still, it would be nice
to share some components and internal services.
C- Configuration details which need some care - like injecting a JGroups
channel for clustering - need to be applied to each instance in isolation
(so large parts of the configuration would be quite similar but not
identical).
D- Incoming messages into a JGroups Receiver need to be routed not only
among indexes, but also among engine instances. This prevents Query from
reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is
stored in different (indexed) caches, they'll share the same index. Is
it a problem? I'm tempted to consider this a good thing, but wonder if
it would surprise some users. Would you expect that?
2#- Configuration format overhaul: indexing options won't be set in the
cache section but in the global section. I'm looking forward to using the
schema extensions anyway to provide a better configuration experience than
the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be
able to figure out from which cache the value should be loaded.
3#A we could have the cache name encoded in the index, as part
of the identifier: {PK,cacheName} (see the sketch below)
3#B we actually shard the index, keeping a physically separate
index per cache. This would mean searching on the joint index view but
extracting hits from specific indexes to keep track of "which index".
I think we can do that, but it's definitely tricky.
It's likely easier to keep indexed values from different caches in
different indexes. That would mean rejecting 1# and altering the
user-defined index name, for example by appending the cache name to the
user-defined string.
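To make option 3#A a bit more concrete, the identifier stored in the index
could be a small composite of the cache key and the cache name, so a hit can
be resolved back to the owning cache. This is only an illustrative sketch;
the class and field names are made up:

import java.io.Serializable;

// Hypothetical composite identifier for option 3#A; names are illustrative only.
final class IndexedKey implements Serializable {
    final Object primaryKey;  // the key of the entry in its cache
    final String cacheName;   // the cache the entry was stored in

    IndexedKey(Object primaryKey, String cacheName) {
        this.primaryKey = primaryKey;
        this.cacheName = cacheName;
    }
}

// Resolving a search hit back to its value would then look roughly like:
//   cacheManager.getCache(hit.cacheName).get(hit.primaryKey);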
Any comment?
Cheers,
Sanne
10 years, 10 months
Remote Listeners WAS: Question regarding Hotrod
by Manik Surtani
Adding infinispan-dev - please use the mailing list for such discussions in the future.
I'll let Mircea and Galder comment some more re: timeframes for remote listeners, but I don't think this has been scheduled for a specific release as yet. As always, if you're willing to help implement this feature, I'm sure Galder will be able to guide you through his design ideas and help you with code review, etc.
Cheers
Manik
On 27 Aug 2013, at 20:51, SUTRA Pierre <pierre.sutra(a)unine.ch> wrote:
> Dear Manik,
>
> I am currently implementing the Leads storage manager on top of Infinispan. The idea is to make an Infinispan module, more precisely implementing an AdvancedCache object on top of a set of remote caches.
>
> Among the Infinispan functionalities that would be necessary are the RemoteCache listeners. This feature is planned for Hotrod 2.0. Could you please tell me when you think this version will be available? Also, do you know how difficult it would be to implement this feature?
>
> Thank you!
> Best,
>
> Pierre
--
Manik Surtani
11 years, 3 months
CacheLoader/CacheWriter
by Mircea Markus
That's the name-pair used in JSR-107.
Cache*Reader* and CacheWriter sound way more natural to me.
!load = offload
!read = write
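To make the symmetry concrete, the renamed pair might read as follows. These
are purely hypothetical interfaces shown to illustrate the naming, not an
existing or proposed SPI:

// Hypothetical interfaces, shown only to illustrate the read/write symmetry.
interface CacheReader<K, V> {
    V read(K key);
}

interface CacheWriter<K, V> {
    void write(K key, V value);
}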
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
11 years, 3 months
Atomic Objects on top of Infinispan
by Pierre Sutra
Hello,
As part of the LEADS project (http://www.leads-project.eu), I
implemented a factory of atomic objects on top of Infinispan. The code
is available in the classes AtomicObject*.java stored in the directory
https://github.com/otrack/Leads-infinispan/tree/master/core/src/main/java...
of my GitHub clone of Infinispan. The tests are written in the class
AtomicObjectFactoryTest.java, available under
core/src/test/java/org/infinispan/atomic. You may also find a simple code
snippet below:
AtomicObjectFactory factory = new AtomicObjectFactory(c1); // c1 is both synchronous and transactional
Set set = (Set) factory.getOrCreateInstanceOf(HashSet.class, "k"); // "k" is the key under which the set is stored in cache c1
set.add("something"); // some call examples
System.out.println(set.toString());
set.addAll(set);
factory.disposeInstanceOf(HashSet.class, "k", true); // dispose the proxy; the boolean keeps the object stored persistently
The pattern I followed to implement this facility is the state machine
replication approach. More precisely, the factory is built on top of the
transactional facility of Infinispan. When the factory creates an object, it
stores a local copy of the object and registers a proxy as a cache listener.
A call to some method of the object is serialized in a transaction consisting
of a single put operation. When the call is de-serialized, it is applied to
the local copy, and if the calling process is local, the result of the call
is returned (this mechanism is implemented via a future object). Notice that
this implies that all the arguments of the object's methods must be
serializable, as well as the object itself. The current implementation
supports the retrieval of persisted objects, includes an optimization
mechanism for read operations, and is elastic (an atomic object supports the
addition and removal of local and/or remote threads accessing it on the fly).
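For readers unfamiliar with the pattern, here is a heavily simplified sketch
of the idea described above. This is not the actual AtomicObject* code; the
class names and the local replay shortcut at the end are inventions for
illustration only:

import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;

import org.infinispan.Cache;

// A serialized method invocation, broadcast through the cache.
final class MethodCall implements Serializable {
    final String method;
    final Object[] args;
    MethodCall(String method, Object[] args) { this.method = method; this.args = args; }
}

// Intercepts calls made on the proxy object and publishes each one as a
// single put; in the real pattern a cache listener replays the call on every
// node's local copy and completes a future with the result.
final class ReplayingHandler implements InvocationHandler {
    private final Cache<String, MethodCall> cache;
    private final String key;       // key under which the object lives
    private final Object localCopy; // local replica of the object

    ReplayingHandler(Cache<String, MethodCall> cache, String key, Object localCopy) {
        this.cache = cache;
        this.key = key;
        this.localCopy = localCopy;
    }

    @Override
    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        // One transactional put per call; every replica's listener observes it.
        cache.put(key, new MethodCall(m.getName(), args));
        // The listener side is omitted here; we replay the call directly on
        // the local copy to keep the sketch self-contained.
        return m.invoke(localCopy, args);
    }
}

// Usage, roughly: wrap a HashSet behind a java.lang.reflect.Proxy for the Set
// interface, passing a ReplayingHandler built on a transactional cache.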
I hope that this work might interest you.
Best,
Pierre
11 years, 3 months
CacheLoader and CacheStore
by Mircea Markus
Hi,
Apologies for the long email :-)
There have been several discussions around how the CacheStore and CacheLoader functionality should look in the new CacheLoader API.
Here are the possible approaches:
1. Have CacheLoader and CacheWriter as independent interfaces, the way JSR 107 does it ([1][2]). Note that CacheWriter does not extend CacheLoader.
Pros:
a. [major] follows the JSR-107 standard, in future people might be used to this way of implementing stuff
b. [minor] a cleaner design: people can only implement a CacheLoader if all they do is load data
Cons:
c. tricky to configure in XML: we use the "loader" tag to configure a CacheLoader and a "writer" (or "store", as we do now) tag to configure a CacheWriter. But what are we going to use in order to configure something that implements both CacheLoader and CacheWriter? "writer", as we do now? Or allow both? Or require one to configure the same entity both as a "loader" and as a "writer"? The latter would make the most sense, but I think it would result in a configuration nightmare.
d. The terms "cache loader" and "cache store" are used interchangeably, which causes confusion among users.
2. Have a single interface that exposes all the methods from CacheLoader and CacheWriter (name it CacheLoader?) - see the sketch after this list.
Pros:
a. [major] clear and simple configuration; avoids confusion among users
b. [minor] most of the API implementors implement both loaders and stores. They'd only have to deal with a single SPI interface for this
Cons:
c. doesn't exactly follow JSR-107's way of doing things.
d. people that only need to load data would need to leave the store methods empty. Not as nice as having a specific interface for it.
3. Stick to the current approach of having CacheWriter extend CacheLoader
Pros:
a. [minor] a cleaner design: people can only implement a CacheLoader if all they do is load data
b. [minor] clear configuration: we'd use "loader" and "writer" tags (as we do now)
Cons:
c. The terms "cache loader" and "cache store" are used interchangeably, which causes confusion among users
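As an illustration of what option 2 could look like, here is a minimal sketch
of a unified SPI. The interface and method names are assumptions made for
illustration, not the agreed-upon API (the email above even leaves the name
open):

// Hypothetical unified SPI combining the loader and writer sides (option 2).
// Names and signatures are illustrative only.
public interface CacheLoaderWriter<K, V> {
    // loader side
    V load(K key);
    boolean contains(K key);

    // writer side
    void write(K key, V value);
    boolean delete(K key);
}

An implementation that only needs to load data would leave the writer methods
as no-ops, which is exactly the drawback listed under 2.d.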
My personal preference is for 2 because of simplicity.
Opinions?
[1] https://github.com/jsr107/jsr107spec/blob/master/src/main/java/javax/cach...
[2] https://github.com/jsr107/jsr107spec/blob/master/src/main/java/javax/cach...
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
11 years, 3 months
singleton @Listeners
by Mircea Markus
This is a problem that pops up constantly:
User: "I add a listener to my distributed/replicated cache but this gets invoked numOwners times - can I make that to be invoked only once cluster wise?"
Developer: "Yes, you can! You have to do that and that..."
What about a "singleton" attribute on the Listener? Would make the reply shorter:
Developer: "Use @Listener(singleton=true)"
Cheers,
Mircea
11 years, 3 months