PutAll command
by Pierre Sutra
Hello,
I would like to know whether it is possible to execute a putAll(Map M)
command in embedded mode via IP multicast. More precisely, instead of
sending the map M to each node iteratively, is there a way to send it to
all nodes with IP multicast, with each node projecting M onto the data
it replicates?
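For what it's worth, the "projection" step could be sketched like this (a toy illustration only, not Infinispan API; isLocalOwner is a hypothetical stand-in for the consistent-hash ownership check):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Toy sketch: the whole map M is broadcast once, and each receiving
// node keeps only the entries it owns. isLocalOwner stands in for the
// local node's consistent-hash ownership check.
class MapProjection {
    static <K, V> Map<K, V> project(Map<K, V> m, Predicate<K> isLocalOwner) {
        Map<K, V> local = new HashMap<>();
        for (Map.Entry<K, V> e : m.entrySet()) {
            if (isLocalOwner.test(e.getKey())) {
                local.put(e.getKey(), e.getValue());
            }
        }
        return local;
    }
}
```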
I thank you in advance for your help.
Cheers,
Pierre
9 years, 9 months
New configuration
by Radim Vansa
Hi,
looking at the new configuration parser, I've noticed that you cannot
configure the ConsistentHashFactory anymore; is this on purpose?
My other concern is that the parser enables features merely by the
presence of an element, L1 for example. I would expect that omitting the
element and writing it out with the default values (as presented in the
XSD) makes no difference, but that is not how the current configuration
works.
My opinion probably comes too late, as the PR was already reviewed,
discussed and integrated, but please at least describe the behaviour
clearly in the XSD. The statement that l1-lifespan "Defaults to 10
minutes." is not correct: it defaults to L1 being disabled.
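To make the concern concrete, here is a hypothetical fragment (attribute and element names only approximate the new schema): the two caches below behave differently even though the second one just spells out the documented "default".

```xml
<!-- L1 disabled: the l1-lifespan attribute is simply omitted -->
<distributed-cache name="no-l1"/>

<!-- L1 enabled: writing out the "default" of 10 minutes actually
     changes behaviour, because the attribute's presence turns L1 on -->
<distributed-cache name="with-l1" l1-lifespan="600000"/>
```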
Thanks
Radim
--
Radim Vansa <rvansa(a)redhat.com>
JBoss DataGrid QA
9 years, 9 months
Writing a custom CacheStore: MarshalledEntryFactory
by Sanne Grinovero
Hi all,
I was toying with a custom CacheStore experiment, and am having some
friction with some of the new SPIs.
So the interface org.infinispan.marshall.core.MarshalledEntryFactory<K, V>
is a helper to use in a CacheStore implementation, and it exposes
three methods:
MarshalledEntry<K,V> newMarshalledEntry(ByteBuffer key, ByteBuffer
valueBytes, ByteBuffer metadataBytes);
MarshalledEntry<K,V> newMarshalledEntry(Object key, ByteBuffer
valueBytes, ByteBuffer metadataBytes);
MarshalledEntry<K,V> newMarshalledEntry(Object key, Object value,
InternalMetadata im);
In my CacheStore - and I suspect in most efficiency-minded
implementations - I don't care about the value Object, but I do want a
specific physical layout for the metadata, so that I can run, for
example, an efficient "purge expired" task.
So, the key is given and the value Object needs to be serialized, but
the InternalMetadata I can map to specific fields.
The problem is at read time: I don't have a marshalled version of the
metadata, but I need to unmarshall the value... there is no helper to
cover this case.
Wouldn't this interface be more practical if it had:
Object unMarshallKey(ByteBuffer);
Object unMarshallValue(ByteBuffer);
InternalMetadata unMarshallMetadata(ByteBuffer);
MarshalledEntry newMarshalledEntry(Object key, Object value,
InternalMetadata im);
Also, I'd get rid of the generics. They are not helping at all; I can
hardly couple my custom CacheStore implementation to the end user's
domain model, right?
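To make the proposal concrete, here is a toy, non-generic sketch of the symmetric helper shape suggested above, backed by plain JDK serialization and java.nio.ByteBuffer just to stay self-contained (Infinispan's real ByteBuffer is its own type, and all names here are hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.ByteBuffer;

// Toy sketch of per-component marshalling: a store can serialize the
// value Object while keeping its own physical layout for the metadata.
class ToyMarshaller {
    ByteBuffer marshall(Serializable o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return ByteBuffer.wrap(bos.toByteArray());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    Object unmarshallValue(ByteBuffer buf) {
        try {
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return ois.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```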
I was also quite surprised that other existing CacheStore
implementations don't hit this limitation; peeking into the
JDBCCacheStore to see how this is supposed to work, it seems that it
essentially duplicates the data by serializing the InternalMetadata
into the BLOB while also storing an Expiry column to query via SQL. I
was interested to see how the purge method could be implemented
efficiently, and found a "TODO notify listeners" ;-)
All the other JDBC-based stores serialize buckets in groups, the REST
store doesn't do purging, LevelDB also duplicates the metadata, and the
Cassandra store is outdated and doesn't emit events on expiry.
9 years, 9 months
Clustered Listener
by Pierre Sutra
Hello,
As part of the LEADS project, we have recently been using the clustered
listeners API. In our use case, the application employs a few thousand
listeners, constantly installing and uninstalling them. The overall
picture is that things work smoothly up to a few hundred listeners, but
above that the cost is high due to the full-replication scheme. To
sidestep this issue, we have added a mechanism that allows listening to
a single key only. In that case, the listener is installed solely at the
key's owners. This greatly helps the scalability of the mechanism, at
the cost of fault tolerance, since in the current state of the
implementation listeners are not forwarded to new data owners.
Since handling topology changes is planned as a next step [1], do you
also plan to support key-specific (or key-range) listeners? Besides,
regarding this last point and the current state of the implementation, I
would like to know the purpose of re-installing the cluster listener on
a view change in the addedListener() method of the CacheNotifierImpl
class. Many thanks in advance.
Best,
Pierre Sutra
[1]
https://github.com/infinispan/infinispan/wiki/Clustered-listeners#handlin...
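To sketch what a key-specific registration could look like (a toy, single-node illustration; nothing here is Infinispan API, and it deliberately ignores ownership and topology changes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Toy sketch: listeners are indexed by the single key they watch, so a
// write notifies only the listeners registered for that exact key,
// instead of every listener in the cluster.
class KeyedListeners<K, V> {
    private final Map<K, List<BiConsumer<K, V>>> listeners = new HashMap<>();

    void addListener(K key, BiConsumer<K, V> listener) {
        listeners.computeIfAbsent(key, k -> new ArrayList<>()).add(listener);
    }

    void onWrite(K key, V value) {
        for (BiConsumer<K, V> l : listeners.getOrDefault(key, List.of())) {
            l.accept(key, value);
        }
    }
}
```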
9 years, 9 months
infinispan test suite, reloaded
by Mircea Markus
I just had a chat with Dan, and we don't think the current process for the test suite works. It's not hard to see why: the suite is almost never green. So we will adopt a more classic and simple approach: if a test fails, a blocker JIRA is created for it and assigned to the component lead, then to the team member who'll start working on it *immediately*. Dan will be the watchdog starting today, so please expect blocker JIRAs coming your way and treat them accordingly.
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
9 years, 9 months
Multicast routing on Mac OS X
by Bela Ban
I added some advice on configuring IP multicast routes on Mac OS X.
This is probably only of concern to those who want to bind to the
loopback device (127.0.0.1) and multicast locally, e.g. for running the
test suite.
It is beyond me why a node cannot bind to 127.0.0.1 and use the default
route (0.0.0.0) for multicasting, e.g. if no multicast route has been
defined. This works perfectly on other operating systems. If you know
why, please share the solution; then [1] would not be needed...
See [1] for details.
[1] https://issues.jboss.org/browse/JGRP-1808
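For reference, the workaround boils down to adding an explicit multicast route via the loopback interface; the exact prefix and syntax below are from memory and may differ from what [1] recommends, so treat them as an approximation:

```shell
# Route multicast traffic through the loopback interface so a node
# bound to 127.0.0.1 can multicast locally (syntax may vary by release):
sudo route add -net 224.0.0.0/4 -interface lo0

# Check the routing table for the multicast route:
netstat -rn | grep 224
```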
--
Bela Ban, JGroups lead (http://www.jgroups.org)
9 years, 9 months
Where's the roadmap?
by Sanne Grinovero
I was asked about the Infinispan roadmap in a forum post; my draft reads:
"Sure it's available online, see.."
but then I could actually only find this:
https://community.jboss.org/wiki/InfinispanRoadmap
(which is very outdated).
So, what's the roadmap?
Would be nice if we could have it updated and published on the new website.
Cheers,
Sanne
9 years, 9 months
Issue with JGroups config files in ispn-core
by Martin Gencur
Hi,
let me mention an issue that several people faced in the past,
independently of each other:
A user application uses a custom JGroups configuration file, but chooses
the same name as one of the files we bundle inside infinispan-core.jar.
The result? People wonder why their custom configuration does not take
effect.
The reason? Infinispan uses the default JGroups file bundled in
infinispan-core.
Who has faced the issue? (I suppose it's just a small subset:) Me,
Radim, Alan, Wolf Fink.
I believe a lot of users run into this issue.
We considered possible solutions, and this combination seems like it
could work (apply both 1) and 2)):
1) rename the config files in the distribution e.g. this way:
jgroups-ec2.xml -> default-jgroups-ec2.xml
jgroups-udp.xml -> default-jgroups-udp.xml
jgroups-tcp.xml -> default-jgroups-tcp.xml
Any other suggestions? internal-jgroups-udp.xml ?
dontEverUseThisFileInYourAppAsTheCustomConfigurationFile-jgroups-udp.xml
? (joke)
(simply a name that users would naturally want to change once they copy
the file into their own app)
2) Throw a warning whenever a user supplies a custom JGroups
configuration file that has the same name as one of the above
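A rough sketch of what the warning in 2) could check (hypothetical, not existing Infinispan code): if the requested resource name resolves to more than one classpath entry, the user's file is probably being shadowed by the bundled one.

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

// Sketch of proposal 2): detect an ambiguous JGroups config file name.
// If more than one classpath entry matches, warn that only the first
// match (possibly the copy bundled in infinispan-core.jar) will be used.
class ConfigNameCheck {
    static List<URL> findAll(String resourceName) throws IOException {
        List<URL> hits = Collections.list(
            Thread.currentThread().getContextClassLoader()
                  .getResources(resourceName));
        if (hits.size() > 1) {
            System.err.println("WARN: " + resourceName + " matches "
                + hits.size() + " classpath entries; using the first: "
                + hits.get(0));
        }
        return hits;
    }
}
```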
WDYT?
Thanks!
Martin
9 years, 9 months
LevelDB & expirationQueue
by Pedro Ruivo
Hi,
I found a couple of issues with the expirationQueue in the LevelDB
store.
AFAIK, the goal of this queue is to avoid two writes to LevelDB per
Infinispan write; correct me if I'm wrong. It is also drained only when
the eviction thread is triggered (every minute by default).
#1 Because the queue is only drained when the eviction thread is
triggered, it is difficult to configure a queue size + wake-up interval
that suits all possible workloads.
A possible solution is to use an internal thread in LevelDBStore to
drain this queue.
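A minimal sketch of the internal drain thread suggested in #1 (all names hypothetical; a real store would write each batch to LevelDB instead of calling a consumer):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Sketch of #1: a store-internal daemon thread drains the expiration
// queue as entries arrive, instead of waiting for the eviction
// thread's wake-up interval.
class ExpirationDrainer<E> {
    private final BlockingQueue<E> queue = new LinkedBlockingQueue<>();
    private final Thread drainer;

    ExpirationDrainer(Consumer<List<E>> writeBatch) {
        drainer = new Thread(() -> {
            List<E> batch = new ArrayList<>();
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    batch.add(queue.take());  // block until something is queued
                    queue.drainTo(batch);     // then grab everything else waiting
                    writeBatch.accept(batch); // one batched write instead of many
                    batch.clear();
                }
            } catch (InterruptedException expectedOnStop) {
                // stop() interrupts this thread; exit quietly
            }
        });
        drainer.setDaemon(true);
        drainer.start();
    }

    void offer(E e) { queue.offer(e); }

    void stop() { drainer.interrupt(); }
}
```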
#2 It is possible to write to LevelDB asynchronously, so why can't we
remove the queue altogether? Do we have performance numbers that show a
degradation without the queue?
Thoughts?
Cheers,
Pedro
9 years, 10 months