Infinispan embedded off-heap cache
by yavuz gokirmak
Hi all,
Is it possible to use Infinispan as an embedded off-heap cache?
As I understand it, this is not implemented yet.
If this is the case, we are planning to put effort into off-heap embedded
cache development.
I would really like to hear your advice,
best regards
10 years, 9 months
Design change in Infinispan Query
by Sanne Grinovero
Hello all,
currently Infinispan Query is an interceptor registered on the
specific Cache instance which has indexing enabled; each such
interceptor does everything it needs to do solely within the scope of
the cache it was registered in.
If you enable indexing - for example - on 3 different caches, there
will be 3 different Hibernate Search engines started in the background,
and they are all unaware of each other.
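To make the current state concrete, here's a minimal sketch of how
indexing is enabled per cache with the programmatic API (assuming the
5.x/6.x configuration builders; the directory provider and cache names
are only illustrative):

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class PerCacheIndexingSketch {
   public static void main(String[] args) {
      EmbeddedCacheManager cm = new DefaultCacheManager();

      // Each cache with indexing enabled gets its own Hibernate Search
      // engine started behind the scenes.
      Configuration indexed = new ConfigurationBuilder()
            .indexing().enable()
            .addProperty("default.directory_provider", "ram")
            .build();

      cm.defineConfiguration("books", indexed);
      cm.defineConfiguration("authors", indexed); // second indexed cache -> second engine

      cm.getCache("books");
      cm.getCache("authors");

      cm.stop();
   }
}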
After some design discussions with Ales for CapeDwarf, and also to
call attention to something that has bothered me for some time, I'd
like to evaluate the option of having a single Hibernate Search engine
registered in the CacheManager and shared across all indexed
caches.
Current design limitations:
A- If they are all configured to use the same base directory to
store indexes, and happen to have same-named indexes, they'll share
the index without being aware of each other. This is going to break
unless the user configures some tricky parameters, and even so
performance won't be great: instances will lock each other out, or at
best write in alternate turns.
B- The search engine isn't particularly "heavy"; still, it would be
nice to share some components and internal services.
C- Configuration details which need some care - like injecting a
JGroups channel for clustering - need to be done separately and
correctly for each instance (so large parts of the configuration would
be quite similar but not totally equal).
D- Incoming messages into a JGroups Receiver need to be routed not
only among indexes, but also among Engine instances. This prevents
Query from reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is
stored in different (indexed) caches, they'll share the same index. Is
it a problem? I'm tempted to consider this a good thing, but wonder if
it would surprise some users. Would you expect that?
2#- Configuration format overhaul: indexing options won't be set in
the cache section but in the global section. I'm looking forward to
using the schema extensions anyway to provide a better configuration
experience than the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be
able to figure out from which cache the value should be loaded.
3#A we could have the cache name encoded in the index, as part
of the identifier: {PK,cacheName} (a rough sketch follows below)
3#B we actually shard the index, keeping a physically separate
index per cache. This would mean searching on the joint index view but
extracting hits from specific indexes to keep track of "which index"...
I think we can do that, but it's definitely tricky.
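To make 3#A more concrete, here's a rough sketch of the kind of
composite identifier it implies (purely illustrative, not an existing
Infinispan class):

// Illustrative composite identifier pairing the user's key with the
// owning cache name, so a search hit can be routed back to the right cache.
public final class IndexedKey {
   private final Object primaryKey;
   private final String cacheName;

   public IndexedKey(Object primaryKey, String cacheName) {
      this.primaryKey = primaryKey;
      this.cacheName = cacheName;
   }

   public Object primaryKey() { return primaryKey; }
   public String cacheName()  { return cacheName; }

   @Override
   public boolean equals(Object o) {
      if (this == o) return true;
      if (!(o instanceof IndexedKey)) return false;
      IndexedKey other = (IndexedKey) o;
      return primaryKey.equals(other.primaryKey) && cacheName.equals(other.cacheName);
   }

   @Override
   public int hashCode() {
      return 31 * primaryKey.hashCode() + cacheName.hashCode();
   }
}

On a hit, the query layer would then do something like
cacheManager.getCache(key.cacheName()).get(key.primaryKey()) to load
the matching value.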
It's likely easier to keep indexed values from different caches in
different indexes. That would mean rejecting 1# and messing with the
user-defined index name, for example by adding the cache name to the
user-defined string.
Any comment?
Cheers,
Sanne
10 years, 10 months
CS API still needs more polishing?
by Galder Zamarreño
Hi,
Doing this produces an unchecked-assignment warning:
ExternalStore<Integer, String> boundedStore = new SingleFileStore();
Granted, the users won't be doing this, but should all cache stores use generics properly? At the end of the day, implementations such as SingleFileStore are what cache store developers are gonna be looking at for inspiration on how to write cache stores.
I'm aware that some of the internals assume Object keys and values, and there will be a point where an unchecked cast will need to be done, but this should happen within the internals of our cache implementation.
Cache store implementations should use generics properly.
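To illustrate what "using generics properly" buys us, here's a small self-contained example built on hypothetical stand-in types (not the real persistence SPI):

import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins, only to show why raw implementations force
// unchecked assignments on callers.
interface Store<K, V> {
   void write(K key, V value);
   V load(K key);
}

// Analogous to a store implemented against raw types:
class RawStore implements Store {
   public void write(Object key, Object value) { /* no-op */ }
   public Object load(Object key) { return null; }
}

// Properly generified implementation:
class TypedStore<K, V> implements Store<K, V> {
   private final Map<K, V> data = new HashMap<K, V>();
   public void write(K key, V value) { data.put(key, value); }
   public V load(K key) { return data.get(key); }
}

class GenericsDemo {
   public static void main(String[] args) {
      Store<Integer, String> raw = new RawStore();                      // compiles, but with an unchecked warning
      Store<Integer, String> typed = new TypedStore<Integer, String>(); // fully type-checked, no warning
      typed.write(1, "hello");
      System.out.println(typed.load(1));
   }
}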
Cheers,
p.s. Or maybe we should move to a language where generics are properly enforced ;)
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
11 years, 2 months
input JSON -> convert -> put into ISPN cache & ready for Queries
by Tomas Sykora
Hi team!
I need to ask for your help.
It's connected to the OData endpoint (https://issues.jboss.org/browse/ISPN-2109). I was thinking about the design etc., and it would be nice to map OData queries to Infinispan queries so clients can get their results based on a particular query.
You know, there is basically not much to do with only a schema-less key-value store. Exposing only values to clients based on their key requests does not fully use OData's capabilities.
So I was thinking about something like this...
From any client you send a JSON object (for example a Book, with fields: title, author, description) to the OData service and would like to store a query-able Book object value in the cache "under" some key.
So you go: JSON --> query-able Book.class object --> cache.put("key", bookFromJson);
Then, in pseudo query: get-me-books-filter-description-contains-"great IT book"-top-N-results --> issue query on cache, get results --> transform returned Book objects into JSON, return to client
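For the case where the Book class is already known at compile time, a rough sketch of the put-and-query side (assuming Jackson 2 for the JSON binding and the Infinispan Query API; the cache name and config file are made up, and the Book class sketched further down is assumed to expose its fields to Jackson):

import java.util.List;

import org.apache.lucene.search.Query;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.query.Search;
import org.infinispan.query.SearchManager;

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonToIndexedBook {
   public static void main(String[] args) throws Exception {
      // "books" is assumed to be a cache with indexing enabled in its configuration.
      Cache<String, Book> cache =
            new DefaultCacheManager("infinispan-indexed.xml").getCache("books");

      // JSON String --> annotated, query-able Book instance.
      String json = "{\"title\":\"Some Book\",\"author\":\"Someone\",\"description\":\"great IT book\"}";
      ObjectMapper mapper = new ObjectMapper();
      Book book = mapper.readValue(json, Book.class);

      // Put it into the cache so the indexing interceptor picks it up.
      cache.put("key", book);

      // The pseudo query above, expressed through the Hibernate Search DSL.
      SearchManager sm = Search.getSearchManager(cache);
      Query q = sm.buildQueryBuilderForClass(Book.class).get()
                  .keyword().onField("description").matching("great IT book").createQuery();
      List<Object> hits = sm.getQuery(q, Book.class).list();

      // Turn the matching Book objects back into JSON for the OData response.
      for (Object hit : hits) {
         System.out.println(mapper.writeValueAsString(hit));
      }
   }
}

The open question below is what to do when the class is not known at compile time.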
My question is:
How do I transform JSON input, which is in most cases a simple String built according to JSON rules, into an object which is query-able and can be put into the cache?
The thing is that you usually have a Java class:
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Indexed
public class Book {
   @Field String title;
   @Field String author;
   // etc. etc.
}
I simply don't know how to create an object ready for queries, or even how to create such an annotated class and instantiate it, for a subsequent put into the cache.
I'm currently exploring this: http://www.jboss.org/javassist
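In case it helps the discussion, here's a rough Javassist sketch of building such an annotated class at runtime (the class and field names are made up, and error handling is omitted):

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtField;
import javassist.Modifier;
import javassist.bytecode.AnnotationsAttribute;
import javassist.bytecode.ClassFile;
import javassist.bytecode.ConstPool;
import javassist.bytecode.annotation.Annotation;

public class DynamicBookFactory {

   public static Class<?> buildBookClass() throws Exception {
      ClassPool pool = ClassPool.getDefault();
      CtClass cc = pool.makeClass("odata.dynamic.Book");
      ClassFile cf = cc.getClassFile();
      ConstPool cp = cf.getConstPool();

      // @Indexed on the class (the Hibernate Search annotation, referenced by name).
      AnnotationsAttribute classAttr = new AnnotationsAttribute(cp, AnnotationsAttribute.visibleTag);
      classAttr.addAnnotation(new Annotation("org.hibernate.search.annotations.Indexed", cp));
      cf.addAttribute(classAttr);

      // A public String field "title" annotated with @Field.
      CtField title = new CtField(pool.get("java.lang.String"), "title", cc);
      title.setModifiers(Modifier.PUBLIC);
      AnnotationsAttribute fieldAttr = new AnnotationsAttribute(cp, AnnotationsAttribute.visibleTag);
      fieldAttr.addAnnotation(new Annotation("org.hibernate.search.annotations.Field", cp));
      title.getFieldInfo().addAttribute(fieldAttr);
      cc.addField(title);

      // Load the generated class into the current class loader.
      return cc.toClass();
   }

   public static void main(String[] args) throws Exception {
      Object book = buildBookClass().newInstance();
      book.getClass().getField("title").set(book, "Some Book");
      System.out.println(book.getClass().getName() + ".title = "
            + book.getClass().getField("title").get(book));
   }
}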
Or do you see any other, maybe totally different, approach to doing it?
THANK YOU very much for any input!
I'm stuck on this right now... that's why I'm asking for help.
Have a nice day all!
Tomas
11 years, 3 months
Infinispan 6.0.0.Beta2 is released!
by Vladimir Blagojevic
Dear Infinispan community,
Staying committed to the software development philosophy of "Release
early. Release often. And listen to your customers", we are releasing
Infinispan 6.0.0.Beta2 today. This is mainly a stabilization release
after the flurry of new features released in Beta1. The Beta2 release
contains a few fixes related to Hot Rod remote clients as well as some
minor fixes in the LevelDB cache store.
For a complete list of features and fixes included in this release
please refer to the release notes. Visit our downloads section to find
the latest release and if you have any questions please check our
forums, our mailing lists or ping us directly on IRC.
Cheers,
Vladimir
11 years, 3 months
Infinispan 6.0.0.Beta2 release is tomorrow (Friday Sept 27th)
by Vladimir Blagojevic
Guys,
Please have a look at the outstanding issues for Beta2 at
http://goo.gl/nS4EuV. The target time to start the release is 9am EST,
which should be almost the end of the day in Europe - so please have
your PRs reviewed and ready for integration by then. I just spoke with
Adrian, Dan and Mircea, who are hacking together on some critical
remaining JIRAs for this release. What is your progress, and what can we
expect to have in by tomorrow, guys? Pedro has some critical issues as
well; what is their status, Pedro? Tristan, Galder and William have some
issues on their plates as well, but they are not critical.
Regards,
Vladimir
11 years, 3 months
Basic issue with replicated sync caches
by Giovanni Meo
Hi infinispan-dev,
I'm having a basic issue with Infinispan and I wonder if I can get
some leads on what to look at next. I have a cache configured in
replicated/sync mode on a cluster made of 3 nodes.
- Node1 writes a key/value pair into the cache
- Node2 gets it, because I have registered a listener for it and I see
the message being logged
- Node3 never gets it
No error is raised anywhere. I'm using Infinispan 5.3.0.
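The listener is registered more or less like this (a minimal sketch of the standard Infinispan listener API; the class name and log output are placeholders, not the actual code used here):

import org.infinispan.Cache;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

@Listener
public class WorkOrderListener {

   @CacheEntryCreated
   public void entryCreated(CacheEntryCreatedEvent<String, Object> event) {
      // The method is invoked both before and after the entry is created;
      // only log the post-event.
      if (!event.isPre()) {
         System.out.println("Created " + event.getKey() + " on "
               + event.getCache().getCacheManager().getAddress());
      }
   }

   public static void register(Cache<String, Object> cache) {
      cache.addListener(new WorkOrderListener());
   }
}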
> osgi> cacheinfo default frm.workOrder
> Info for cache frm.workOrder on container default
> LOCKING_PROP = LockingConfiguration{concurrencyLevel=32, isolationLevel=READ_COMMITTED, lockAcquisitionTimeout=10000, useLockStriping=false, writeSkewCheck=false}
> TRANSACTION_PROP = TransactionConfiguration{autoCommit=true, cacheStopTimeout=30000, eagerLockingSingleNode=false, lockingMode=OPTIMISTIC, syncCommitPhase=true, syncRollbackPhase=false, transactionManagerLookup=org.infinispan.transaction.lookup.GenericTransactionManagerLookup@1789ff2c, transactionSynchronizationRegistryLookup=null, transactionMode=NON_TRANSACTIONAL, useEagerLocking=false, useSynchronization=true, recovery=RecoveryConfiguration{enabled=true, recoveryInfoCacheName='__recoveryInfoCacheName__'}, reaperWakeUpInterval=1000, completedTxTimeout=15000, use1PcForAutoCommitTransactions=false}
> CLUSTERING_PROP = ClusteringConfiguration{async=AsyncConfiguration{asyncMarshalling=false, replicationQueue=null, replicationQueueInterval=5000, replicationQueueMaxElements=1000, useReplicationQueue=false}, cacheMode=REPL_SYNC, hash=HashConfiguration{consistentHashFactory=null, hash=MurmurHash3, numOwners=2, numSegments=60, groupsConfiguration=GroupsConfiguration{enabled=false, groupers=[]}, stateTransferConfiguration=StateTransferConfiguration{chunkSize=10000, fetchInMemoryState=true, originalFetchInMemoryState=null, timeout=240000, awaitInitialTransfer=true, originalAwaitInitialTransfer=null}}, l1=L1Configuration{enabled=false, invalidationThreshold=0, lifespan=600000, onRehash=false, cleanupTaskFrequency=600000}, stateTransfer=StateTransferConfiguration{chunkSize=10000, fetchInMemoryState=true, originalFetchInMemoryState=null, timeout=240000, awaitInitialTransfer=true, originalAwaitInitialTransfer=null}, sync=SyncConfiguration{replTimeout=15000}}
The cache has the characteristics shown above.
Thanks in advance for any leads on what to look at to debug the issue further,
Giovanni
--
Giovanni Meo
Via del Serafico, 200, 00142 Roma, Italia
Telephone: +390651644000    Mobile: +393480700958
Fax: +390651645917          VOIP: 8-3964000
“The pessimist complains about the wind;
the optimist expects it to change;
the realist adjusts the sails.” -- Wm. Arthur Ward
IETF credo: "Rough consensus and running code"
11 years, 3 months