problems configuring distributed mode
by Anders
Hi,
I'm getting the following error whenever I try to write to Infinispan
over memcached:
java.lang.IllegalStateException: If clustered, Version prefix
cannot be 0. Rank calculator probably not in use.
This is the relevant part of my infinispan-config.xml file:
<clustering mode="distribution">
   <async/>
   <hash numOwners="1"/>
</clustering>
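For completeness, the surrounding file is roughly this (the cluster name is just what I picked; JGroups UDP is the default stack):

<infinispan>
   <global>
      <transport clusterName="memcached-cluster"/>
   </global>
   <default>
      <clustering mode="distribution">
         <async/>
         <hash numOwners="1"/>
      </clustering>
   </default>
</infinispan>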
I'm using the JGroups UDP transport, but I've tried EC2 and TCP as well, all with the
same result.
I've also tried different clustering modes. The only one that works is
local, but that rather defeats the purpose of what we need to do.
I've downloaded the source code, found the ClusterIdGenerator class, and
attached it programmatically like so:
cacheManager.addListener(new ClusterIdGenerator().getRankCalculatorListener());
cacheManager.start();
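In full, the startup sequence looks like this (a sketch; the config file name is from my setup, and I'm not sure I have the right module for the ClusterIdGenerator import):

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
// ClusterIdGenerator comes from the server core module; import omitted

// create the manager without auto-starting it, so the listener is attached first
DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan-config.xml", false);
cacheManager.addListener(new ClusterIdGenerator().getRankCalculatorListener());
cacheManager.start();
Cache<byte[], byte[]> cache = cacheManager.getCache();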
Any help figuring out how to get clustered mode running would be much
appreciated.
-A
13 years, 7 months
[ISPN-78] RFC: Finalizing API
by Olaf Bergner
Continuing to work on large object support, I'm now at a point where I
would like to finalize the API so that I'm free to move forward with
some confidence. This is its current incarnation:
public interface StreamingHandler<K> {
   void writeToKey(K key, InputStream largeObject);
   OutputStream writeToKey(K key);
   InputStream readFromKey(K key);
   boolean removeKey(K key);
   StreamingHandler<K> withFlags(Flag... flags);
}
where a user obtains a StreamingHandler through calling
Cache.getStreamingHandler(). The StreamingHandler manages large objects
on behalf of the backing cache. This doesn't look too bad to me, but
there's always room for improvement.
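For illustration, typical usage would look something like this (the key and file are made up, and Cache.getStreamingHandler() is of course not implemented yet):

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;

Cache<String, ?> cache = cacheManager.getCache("large-objects");
StreamingHandler<String> handler = cache.getStreamingHandler();

// store a large object by handing over a stream
InputStream movie = new FileInputStream("/tmp/movie.bin");
handler.writeToKey("movie-1", movie);

// or obtain a stream to write into
OutputStream out = handler.writeToKey("movie-2");

// and read one back
InputStream in = handler.readFromKey("movie-1");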
First, a fundamental question with potentially disruptive implications:
what does Large Object Support aspire to become in the long run? A
comfortable means for users to store and retrieve large objects in
Infinispan (as it seems today)? Or rather a fully fledged distributed
file system? I favor the former, without precluding the possibility that
one day Infinispan will also implement a file system interface.
Now a question regarding the implementation, the answer to which might
affect the API: does it make sense to strictly separate "regular" caches
from those dealing with large objects? I think so, since I presume that
most applications will treat large objects differently from the more
comfortably sized ones. At least that is my personal experience.
Furthermore it might prove difficult to tune a cache that contains both
regular and large objects. Plus by introducing large object caches we
might be able to find a nice set of default settings for those.
If we chose to introduce dedicated large object caches I would opt for
introducing StreamingCache<K> or even LargeObjectCache<K> instead of
StreamingHandler<K> since then a StreamingHandler wouldn't handle large
objects on behalf of some backing cache. Rather it would act as *the*
interface to a cache exclusively reserved for large objects. It follows
that a user would directly access a StreamingCache, not indirectly via
Cache.getStreamingHandler().
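To make that concrete, the dedicated-cache variant might look roughly like this (a pure sketch, nothing settled, and getStreamingCache() is a name I just made up):

public interface StreamingCache<K> {
   void writeToKey(K key, InputStream largeObject);
   OutputStream writeToKey(K key);
   InputStream readFromKey(K key);
   boolean removeKey(K key);
   StreamingCache<K> withFlags(Flag... flags);
}

// obtained directly, e.g.
// StreamingCache<String> largeObjects = cacheManager.getStreamingCache("large-objects");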
And finally there is that eternal question of how to properly name those
interface methods. Trustin has expressed his concern that
writeToKey(...), removeKey() and readFromKey(...) might fail to convey
their semantics. And indeed there's nothing in their names to tell a
user that they deal with large objects. What about alternatives à la
- void storeLargeObject(K key, InputStream largeObject) or
  putLargeObject(K key, InputStream largeObject)
- OutputStream newLargeObject(K key) or simply
  OutputStream largeObject(K key)
- InputStream getLargeObject(K key)
- boolean removeLargeObject(K key)
? Rack your brains and keep those splendid ideas coming.
Cheers,
Olaf
13 years, 7 months
...quick follow up
by Anders
Forgot to mention that I've tried this on:
4.2.1.FINAL
5.0.0.CR2
5.0.0.CR3
-W
13 years, 7 months
Annotation Processor on Eclipse
by Israel Lacerra
Hi everybody. Does anyone have problems with JBoss Logging on Eclipse? The
Annotation Processing option does not appear under the Java Compiler settings.
Maybe I have to download a plugin, or something else, but I have not found
anything so far.
Israel
13 years, 7 months
Questions on AtomicMap and improvement proposals
by Emmanuel Bernard
In exchange for answers, I will improve at least the JavaDoc and maybe create a wiki from this info.
(note that this has nothing to do with the ongoing discussion on key-level locks for a sister of AtomicMap)
I've tried to search the wiki but found nothing on AtomicMap (only forum posts) and I've read the AtomicMap JavaDoc but I am still a bit unsure of what's going on.
When I need to create an AtomicMap, I must use
AtomicMap<String, Object> resultset = AtomicMapLookup.getAtomicMap(cache, "my_atomic_map_key");
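For context, here is the full pattern in a self-contained form (the configuration is mine; my understanding is that AtomicMaps need invocation batching or transactions enabled):

import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMap;
import org.infinispan.atomic.AtomicMapLookup;
import org.infinispan.config.Configuration;
import org.infinispan.manager.DefaultCacheManager;

Configuration cfg = new Configuration();
cfg.setInvocationBatchingEnabled(true);
DefaultCacheManager cacheManager = new DefaultCacheManager(cfg);
Cache<String, Object> cache = cacheManager.getCache();

// getAtomicMap creates the map under the key when it is absent
AtomicMap<String, Object> resultset =
      AtomicMapLookup.getAtomicMap(cache, "my_atomic_map_key");
resultset.put("row1", "value1");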
Questions
Here are a few questions for you
1. How can I apply advanced flags to get or create the AtomicMap?
cache.withFlag(SKIP_CACHE_STORE,FORCE_ASYNCHRONOUS).get("my_atomic_map_key");
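On a regular cache I know I can go through the advanced cache, along these lines, but it is not obvious where the equivalent hook is when the map comes from AtomicMapLookup:

import org.infinispan.context.Flag;

cache.getAdvancedCache()
     .withFlags(Flag.SKIP_CACHE_STORE, Flag.FORCE_ASYNCHRONOUS)
     .get("my_atomic_map_key");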
2. Is it legal to use other cache methods like replace() on an AtomicMap value, and what are the consequences?
3. When I expect a map to be present in the key, can I use cache.get("my_atomic_map_key") and still benefit from my properly proxied AtomicMap? Or am I forced to use AtomicMapLookup.getAtomicMap?
4. Using AtomicMapLookup.getAtomicMap, how can I differentiate an empty key where a new map has been created from a retrieval of an empty map stored in the key?
5. Must I use the AtomicMapLookup.remove method to remove an atomic map?
6. Can I clone an AtomicMap?
For using methods like replace(), it seems one would need to clone the AtomicMap to compare it to the initial value. When I tried briefly, I was unsure I could do that.
Proposed improvements
I'm asking all these questions because they can potentially make AtomicMap users' lives quite hard, and abstracting Infinispan from OGM would make for a damn ugly interface / contract.
1. Would it be possible to let people get their proxied AtomicMap from cache.get() and other get methods?
It seems that either the marshaller or the get operations (probably the get ops, as local caches should work as well) should be able to intercept an AtomicMap and essentially do what AtomicMapLookup is doing (and make sure it is properly wrapped).
If that's possible, it would already be a big benefit for the user, as explicit lookup would then only be needed for AtomicMap creation or cloning.
2. Would it be possible to let people create standalone AtomicMaps and associate them with the cache lazily and in a transparent fashion?
AtomicMap<String, Object> resultSet = AtomicMapLookup.createAtomicMap();
cache.put(key, resultSet); //lazily set cache, deltaMapKey, batchContainer and icc for new AtomicMaps
3. Could we guarantee some equality between Map and AtomicMap?
If I do:
Map map = new HashMap(atomicMap);
atomicMap.put("key", "value");
cache.replace(key, map, atomicMap);
Am I guaranteed that it can work (if no one has pushed a change behind my back)?
Emmanuel
13 years, 7 months
JAXB help needed
by Pete Muir
As usual I'm struggling with JAXB. What I want to do is pretty trivial with a stream-based parser:
1) parse an xs:sequence into a list of strings
2) receive notification that this list of strings has been parsed via a post parse callback
3) create a list of object instances (each string represents a class name)
4) set this into the domain model for configuration
I have absolutely no idea how I do this with JAXB (and no real desire to learn, which is probably the main problem ;-), so can someone help me?
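The closest I've been able to cobble together from examples is a list-valued property plus JAXB's afterUnmarshal callback, along these lines (names are illustrative, and I may well be holding it wrong):

import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.*;

@XmlAccessorType(XmlAccessType.FIELD)
public class ClassListConfig {

   // 1) JAXB fills this from the xs:sequence of <class> elements
   @XmlElement(name = "class")
   private List<String> classNames = new ArrayList<String>();

   @XmlTransient
   private List<Object> instances = new ArrayList<Object>();

   // 2) JAXB calls this well-known callback once the element is fully parsed
   void afterUnmarshal(Unmarshaller unmarshaller, Object parent) {
      // 3) turn each class name into an instance
      for (String name : classNames) {
         try {
            instances.add(Class.forName(name).newInstance());
         } catch (Exception e) {
            throw new IllegalStateException("Could not instantiate " + name, e);
         }
      }
      // 4) from here the instances would be pushed into the configuration domain model
   }
}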
For now I'll just add this to the fluent config and not expose it via xml.
13 years, 7 months
https://issues.jboss.org/browse/ISPN-977
by Mircea Markus
Hi,
The basic problem behind this is that I need to be notified when a new consistent hash is installed.
ATM there isn't any support (of which I know) for a "@ConsistentHashChangeListener".
I'm thinking of adding such notifications either:
a) internally: Observer pattern on DistributionManager or even on DistributionManagerImpl
b) more generically, as a fully fledged listener.
I favor a), and then if more people ask for it we will expose it as a fully fledged listener.
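To make a) concrete, here is a rough sketch (all names are hypothetical, nothing of this exists yet):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.infinispan.distribution.ch.ConsistentHash;

// hypothetical callback interface
public interface ConsistentHashChangeObserver {
   void consistentHashInstalled(ConsistentHash newCH);
}

// inside DistributionManagerImpl (sketch):
private final List<ConsistentHashChangeObserver> observers =
      new CopyOnWriteArrayList<ConsistentHashChangeObserver>();

public void addObserver(ConsistentHashChangeObserver observer) {
   observers.add(observer);
}

// invoked from wherever the new consistent hash gets installed
private void notifyConsistentHashInstalled(ConsistentHash newCH) {
   for (ConsistentHashChangeObserver observer : observers)
      observer.consistentHashInstalled(newCH);
}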
Suggestions?
Cheers,
Mircea
13 years, 7 months
Invoking distributed exec and mapreduce over hotrod
by Vladimir Blagojevic
Galder,
I believe the ability to invoke distributed executors and mapreduce over
hotrod would be very interesting. However, I quickly realized that the
internals of both DistributedExecutorService and MapReduceTask rely
heavily on Cache internals (RpcManager, CommandsFactory,
InterceptorChain) that are only available in non-remote caches. There is
no way to fake this by simply passing a RemoteCache instead of a Cache.
Either we rethink the internals of DistributedExecutorService and
MapReduceTask or we somehow bridge to these abstractions from a thin
client.
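To illustrate the mismatch (the two manager variables are assumed to already be set up):

import org.infinispan.Cache;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.distexec.DefaultExecutorService;
import org.infinispan.distexec.DistributedExecutorService;

// embedded mode works: the executor reaches RpcManager & co. through the Cache
Cache<String, String> embedded = embeddedCacheManager.getCache();
DistributedExecutorService des = new DefaultExecutorService(embedded);

// a Hot Rod client cannot: RemoteCache is not a Cache, so there is nothing to pass
RemoteCache<String, String> remote = remoteCacheManager.getCache();
// new DefaultExecutorService(remote);  // does not compile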
Any thoughts on how we could potentially achieve hotrodization of dist. exec?
Regards,
Vladimir
13 years, 7 months