Configuration visitor - Re: [JBoss JIRA] Commented: (ISPN-145) No transport and singleton store enabled should not be allowed
by Vladimir Blagojevic
Hi,
Galder and I talked about this offline. Time to involve you guys!
I just completed the visitor pattern for our configuration objects. The
visitor is passed from the root of the configuration, the
InfinispanConfiguration object. The InfinispanConfiguration class has a
new method:
public void accept(ConfigurationBeanVisitor v)
How do we want to integrate this visitor into existing structure?
1) We add a new factory method to InfinispanConfiguration with
additional ConfigurationBeanVisitor parameter
2) We leave everything as is, and if there is a need to pass some visitor
we pass it to the InfinispanConfiguration instance directly (from
DefaultCacheManager)
DefaultCacheManager will pass a ValidationVisitor to
InfinispanConfiguration, which will verify the configuration semantically.
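To make the discussion concrete, here is a minimal sketch of the accept/visit dispatch plus a semantic check in the spirit of ISPN-145. All type names here (ConfigBeanVisitor, TransportConfig, SingletonStoreConfig, InfinispanConfig) are simplified stand-ins for illustration, not the actual Infinispan classes:

```java
// Illustrative sketch only; names are invented, not the Infinispan API.
interface ConfigBeanVisitor {
    void visitTransportConfig(TransportConfig c);
    void visitSingletonStoreConfig(SingletonStoreConfig c);
}

class TransportConfig {
    boolean configured;
    void accept(ConfigBeanVisitor v) { v.visitTransportConfig(this); }
}

class SingletonStoreConfig {
    boolean enabled;
    void accept(ConfigBeanVisitor v) { v.visitSingletonStoreConfig(this); }
}

// The root walks the configuration tree, passing the visitor to each bean.
class InfinispanConfig {
    TransportConfig transport = new TransportConfig();
    SingletonStoreConfig store = new SingletonStoreConfig();
    void accept(ConfigBeanVisitor v) {
        transport.accept(v);   // visited before the store, so state is available
        store.accept(v);
    }
}

// Semantic validation as a visitor: reject singleton store without transport.
class ValidationVisitor implements ConfigBeanVisitor {
    private boolean hasTransport;
    public void visitTransportConfig(TransportConfig c) { hasTransport = c.configured; }
    public void visitSingletonStoreConfig(SingletonStoreConfig c) {
        if (c.enabled && !hasTransport)
            throw new IllegalStateException("singleton store enabled without transport");
    }
}
```

The point is that new checks become new visitors, with no changes to the configuration beans themselves.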
Regards,
Vladimir
On 09-09-09 10:19 AM, Galder Zamarreno wrote:
> Good idea :)
>
> On 09/09/2009 04:13 PM, Vladimir Blagojevic wrote:
>> Yeah,
>>
>> I was thinking that we can make a visitor for configuration tree and
>> then you can do verification of any node and other things as well. Use
>> cases will come up in the future for sure.
>>
>> Cheers
>>
>>
>>
>> On 09-09-09 3:29 AM, Galder Zamarreno (JIRA) wrote:
>>> [
>>> https://jira.jboss.org/jira/browse/ISPN-145?page=com.atlassian.jira.plugi...
>>>
>>> ]
>>>
>>> Galder Zamarreno commented on ISPN-145:
>>> ---------------------------------------
>>>
>>> Not sure I understand what you mean by generic though. You mean any
>>> component to have a validation step of some sort?
>>>
>>> Thanks for taking this on :)
>>>
>>>> No transport and singleton store enabled should not be allowed
>>>> --------------------------------------------------------------
>>>>
>>>> Key: ISPN-145
>>>> URL: https://jira.jboss.org/jira/browse/ISPN-145
>>>> Project: Infinispan
>>>> Issue Type: Bug
>>>> Components: Loaders and Stores
>>>> Affects Versions: 4.0.0.ALPHA6
>>>> Reporter: Galder Zamarreno
>>>> Assignee: Vladimir Blagojevic
>>>> Priority: Minor
>>>> Fix For: 4.0.0.CR1
>>>>
>>>>
>>>> Throw configuration exception if singleton store configured without
>>>> transport having been configured.
>>>> It makes no sense to have singleton store enabled when there's no
>>>> transport.
>>
>
13 years, 2 months
Defining new commands in modules
by Manik Surtani
So this is an extension to the discussion around a GenericCommand that has been going around. IMO a GenericCommand is a big -1 from me for various reasons: the whole purpose of the command pattern is to have strongly typed and unit-testable commands. This will help the ongoing work by Mircea, Sanne and Israel on various modules that need to define custom commands.
I proposed the following solution to Mircea earlier today, I'll repeat here for you guys to discuss. Note that this is a *half baked* solution and needs more thought! :-)
* If a module needs to define custom commands, it should define its own ReplicableCommand implementations in its own module.
* It should define a sub-interface to Visitor (MyModuleVisitor) with additional methods to handle the new commands
* Interceptors defined in this module should extend CommandInterceptor AND implement MyModuleVisitor
* These new commands can be created directly, or via a new CommandFactory specially for these commands.
Now for the un-finished bits. :)
* How does RemoteCommandFactory instantiate these new commands? The module should have a way of registering additional command IDs with RemoteCommandFactory.fromStream(). See
http://fisheye.jboss.org/browse/Infinispan/branches/4.2.x/core/src/main/j...
Perhaps RemoteCommandFactory.fromStream() should look up the ID in a map of command creator instances, and each module can register more of these with the RemoteCommandFactory?
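As a sketch of that registry idea: ReplicableCommand and RemoteCommandFactory.fromStream() are names from the discussion above, but the byte-sized command ID, the CommandCreator interface, and the registerCommandCreator method are all assumptions made here for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only; signatures are invented, not the actual Infinispan API.
interface ReplicableCommand { byte getCommandId(); }

interface CommandCreator {
    ReplicableCommand create(byte id, Object[] parameters);
}

class RemoteCommandFactory {
    // Command ID -> creator, instead of a hard-coded switch in fromStream().
    private final Map<Byte, CommandCreator> creators = new ConcurrentHashMap<>();

    // Modules call this at startup to claim their command IDs.
    void registerCommandCreator(byte id, CommandCreator c) {
        if (creators.putIfAbsent(id, c) != null)
            throw new IllegalArgumentException("Command id already registered: " + id);
    }

    // fromStream() consults the map, so core needs no knowledge of module commands.
    ReplicableCommand fromStream(byte id, Object[] parameters) {
        CommandCreator c = creators.get(id);
        if (c == null)
            throw new IllegalArgumentException("Unknown command id: " + id);
        return c.create(id, parameters);
    }
}
```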
* How do interceptors defined in the core module handle commands they aren't aware of? handleDefault()? Or should we define a new handleUnknown() method in Visitor for this case, which would default to a no-op in AbstractVisitor? E.g., in a module-specific command such as MyModuleCommand, I would implement:
class MyModuleCommand implements ReplicableCommand {
   public Object acceptVisitor(InvocationContext ctx, Visitor visitor) throws Throwable {
      if (visitor instanceof MyModuleVisitor) {
         return ((MyModuleVisitor) visitor).visitMyModuleCommand(ctx, this);
      } else {
         return visitor.handleUnknown(ctx, this);
      }
   }
}
Cheers
Manik
PS: There is no JIRA for this. If we like this approach and it works, I suggest we create a JIRA and implement it for 4.2. The impl should be simple once we resolve the outstanding bits.
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
13 years, 6 months
ISPN 200
by Israel Lacerra
Manik,
What do you mean by:
" * The calling node returns a CacheQuery impl that lazily fetches
and collates results from the cluster." (JIRA)
Is it enough if each node returns a list of keys and then we lazily get the
values using the keys? Or does the process have to be lazier still?
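One way to read the "lazily fetches and collates" wording, purely as a sketch with invented names (this is not the actual CacheQuery API): each node returns its matching keys up front, and values are only fetched as the caller iterates:

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Illustrative only: keys are collated eagerly, values fetched on demand.
class LazyQueryResults<K, V> implements Iterable<V> {
    private final List<K> keys;                // collated from all nodes
    private final Function<K, V> valueLoader;  // e.g. a cache.get(key), possibly remote

    LazyQueryResults(List<K> keys, Function<K, V> valueLoader) {
        this.keys = keys;
        this.valueLoader = valueLoader;
    }

    public Iterator<V> iterator() {
        Iterator<K> it = keys.iterator();
        return new Iterator<V>() {
            public boolean hasNext() { return it.hasNext(); }
            // The value fetch happens only when the caller advances.
            public V next() { return valueLoader.apply(it.next()); }
        };
    }
}
```

Whether the key lists themselves also need to be streamed lazily is exactly the open question in the mail above.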
thanks!
Israel Lacerra
13 years, 11 months
Distributed execution framework - API proposal(s)
by Vladimir Blagojevic
Hey,
I spent the last week working on concrete API proposals for the
distributed execution framework. I believe that we are close to
finalizing the proposal, and your input and feedback is important now!
Here are the main ideas where I think we made progress since we last talked.
Access to multiple caches during task execution
While we have agreed to allow access to multiple caches during task
execution, including this logic in the task API complicates it greatly.
The compromise I found is to focus the entire API on one specific cache
but allow access to other caches through the DistributedTaskContext API.
The focus on one specific cache and its input keys will allow us to
properly CH-map task units across the Infinispan cluster and will cover
most of the use cases. A DistributedTaskContext can also easily be
mapped to a single cache. See DistributedTask and DistributedTaskContext
for more details.
DistributedTask and DistributedCallable
I found it useful to separate general task characteristics from the
details of the actual work/computation. The main task characteristics
are therefore specified through the DistributedTask API, while the
details of the actual computation are specified through the
DistributedCallable API. DistributedTask specifies coarse task details
(the failover policy, the task splitting policy, the cancellation policy
and so on), while in the DistributedCallable API implementers focus on
the actual details of a computation/work unit.
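A rough sketch of that split, with method names and signatures guessed for illustration (the actual proposals live in the branches linked below); SumCallable shows the implementer's side of DistributedCallable:

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.Callable;

// Sketch of the context: focused on one cache, with access to others by name.
interface DistributedTaskContext<K, V> {
    Map<K, V> getCache();
    Map<?, ?> getCache(String cacheName);
}

// Coarse per-task policy knobs (placeholders for the real policy objects).
interface DistributedTask<K, V, T> {
    long timeout();
    DistributedCallable<K, V, T> getCallable();
}

// The actual unit of work, handed its context before execution.
interface DistributedCallable<K, V, T> extends Callable<T>, Serializable {
    void setContext(DistributedTaskContext<K, V> ctx);
}

// Example implementer: sums the values of its focused cache.
class SumCallable implements DistributedCallable<String, Integer, Integer> {
    private DistributedTaskContext<String, Integer> ctx;
    public void setContext(DistributedTaskContext<String, Integer> ctx) { this.ctx = ctx; }
    public Integer call() {
        return ctx.getCache().values().stream().mapToInt(Integer::intValue).sum();
    }
}
```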
I have updated the original document [1] to reflect the API update. You
can see the actual proposal in git here [2], and I have also included a
variation of this approach [3] that separates the map and reduce task
phases into separate interfaces and removes the DistributedCallable
interface. I have also kept Trustin's ideas in another proposal [4],
since I would like to include them as well if possible.
Regards,
Vladimir
[1] http://community.jboss.org/wiki/InfinispanDistributedExecutionFramework
[2] https://github.com/vblagoje/infinispan/tree/t_ispn-39_master_prop1
[3] https://github.com/vblagoje/infinispan/tree/t_ispn-39_master_prop2
[4] https://github.com/vblagoje/infinispan/tree/t_ispn-39_master_prop3
13 years, 11 months
Re: [infinispan-dev] Can we replace Hypersonic with...
by Manik Surtani
Ah right, I get your point. So there are 2 cases:
1) When the failing resource manager is *not* an Infinispan node: returning a list of prepared or heuristically committed Xids is trivial for Infinispan since (a) we maintain an internal list of prepared txs, and (b) we don't heuristically commit. (Mircea can confirm this)
2) When the failing resource manager is an Infinispan node (failed and restarted, for example), and the TM calls recover on this node. In this case, recover will return a null, which, correctly, will lead the TM to believe that there are no prepared or heuristically committed txs on this node - which is correct, since the node would have been reset to the last-known stable state prior to the failure.
So what we have right now is the ability to deal with case (2). Case (1) should be implemented as well to be a "good participant" in a distributed transaction.
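A sketch of what case (1) amounts to, with invented names (the real implementation would draw on the internal list of prepared txs mentioned in point (a); this is not Infinispan's actual TransactionTable):

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.transaction.xa.Xid;

// Sketch of case (1): report in-doubt (prepared) branches to the TM.
class SketchXAResource {
    private final Set<Xid> preparedTxs = new CopyOnWriteArraySet<>();

    void markPrepared(Xid xid) { preparedTxs.add(xid); }
    void complete(Xid xid)     { preparedTxs.remove(xid); }

    // What the TM calls during recovery: every branch still in doubt.
    // (A full impl would honor the TMSTARTRSCAN/TMENDRSCAN flag protocol.)
    Xid[] recover(int flag) {
        return preparedTxs.toArray(new Xid[0]);
    }
}
```

Since Infinispan does not heuristically commit, returning only the prepared set would satisfy the contract Mark describes.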
Adding infinispan-dev in cc, as this would be of interest there.
Cheers
Manik
On 16 Dec 2010, at 10:36, Mark Little wrote:
> So XAResource.recover and the XA protocol have well defined semantics for recovery. If you do a no-op and there are resources that need recovering, the transaction may still not be entirely ACID even though the transaction manager believes it to be the case. Unless you do the recovery for the nodes in the recover call and return a null list of Xids, your XAResource implementation is breaking the XA protocol.
>
> Mark.
>
>
> On 16 Dec 2010, at 10:31, Manik Surtani wrote:
>
>>
>> On 16 Dec 2010, at 10:23, Mark Little wrote:
>>
>>> So what happens when the recover method is invoked on the Infinispan XAResource implementation? I'm assuming it obeys the protocol if you're saying "we do support XA" ;-)
>>
>> Well, this is what I mean by "we don't support recover" right now. At the moment recover() is a no-op and we just log it, expecting manual intervention (node restart), but this should be automated (wipe in-memory state and rejoin the cluster).
>>
>>
>>>
>>> Mark.
>>>
>>>
>>> On 16 Dec 2010, at 10:18, Manik Surtani wrote:
>>>
>>>> Well, it hinges on how we implement recover. Recovery for Infinispan is, simply, restarting the node at fault and allow it to regain state from a neighbour. As opposed to more "traditional" impls of XA recovery, involving maintaining a tx log (fsync'd to disk). One may say then that we do support recovery, only that the tx log is maintained "in the cluster".
>>>>
>>>> On 16 Dec 2010, at 09:23, Mark Little wrote:
>>>>
>>>>> So we support a bit of XA then, i.e., not the recover operation?
>>>>>
>>>>> Mark.
>>>>>
>>>>>
>>>>> On 15 Dec 2010, at 17:29, Manik Surtani wrote:
>>>>>
>>>>>>
>>>>>> On 15 Dec 2010, at 17:24, Bill Burke wrote:
>>>>>>
>>>>>>>>
>>>>>>>> eh - you would have the same problems with Infinispan as with Hypersonic explaining users that if you want ACID database access you need
>>>>>>>> to use a real database and not a glorified hashmap ;)
>>>>>>>>
>>>>>>>
>>>>>>> sounds like a good feature request, to support XA/recovery. If you're gonna use Infinispan for a data grid, prolly a lot of people gonna want this.
>>>>>>
>>>>>> We do support XA. Not recovery though - since it is a p2p grid. ("Recovering" would simply involve the node wiping in-memory state, and re-joining the cluster since non-corrupted copies of its data exists elsewhere in the cluster).
>>>>>>
>>>>>> Cheers
>>>>>> Manik
>>>>>>
>>>>>> --
>>>>>> Manik Surtani
>>>>>> manik(a)jboss.org
>>>>>> twitter.com/maniksurtani
>>>>>>
>>>>>> Lead, Infinispan
>>>>>> http://www.infinispan.org
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> ---
>>>>> Mark Little
>>>>> mlittle(a)redhat.com
>>>>>
>>>>> JBoss, by Red Hat
>>>>> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom.
>>>>> Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Brendan Lane (Ireland).
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> --
>>>> Manik Surtani
>>>> manik(a)jboss.org
>>>> twitter.com/maniksurtani
>>>>
>>>> Lead, Infinispan
>>>> http://www.infinispan.org
>>>>
>>>>
>>>>
>>>>
>>>
>>> ---
>>> Mark Little
>>> mlittle(a)redhat.com
>>>
>>> JBoss, by Red Hat
>>> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom.
>>> Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Brendan Lane (Ireland).
>>>
>>>
>>>
>>>
>>
>> --
>> Manik Surtani
>> manik(a)jboss.org
>> twitter.com/maniksurtani
>>
>> Lead, Infinispan
>> http://www.infinispan.org
>>
>>
>>
>
> ---
> Mark Little
> mlittle(a)redhat.com
>
> JBoss, by Red Hat
> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom.
> Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Brendan Lane (Ireland).
>
>
>
>
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Lead, Infinispan
http://www.infinispan.org
13 years, 11 months
Feedback from Mobicents Cluster Framework on top of Infinispan 5.0 Alpha1
by Eduardo Martins
Hi all, I just completed the first iteration of Mobicents Cluster
Framework 2.x, which includes an impl using the first alpha release of
Infinispan 5. We have everything we had in JBoss Cache working; it's a
dumbed-down but higher-level framework, with stuff like fault-tolerant
timers, which is then reused in the whole Mobicents platform to provide
cluster-related features. I believe we already use a lot of Infinispan's
rich feature set, so I guess it's good timing to give some feedback,
report some minor issues, and clear up some doubts.
1) Marshallers
Generally I'm "just OK" with the current API. The fact that
Externalizers depend on annotations makes it impossible for a
higher-level framework to let its clients plug in their own kind of
Externalizers without exposing Infinispan classes. Concretely, our
framework exposes its own Externalizer model, but to use them we need to
wrap the instances of the classes they handle in classes bound to
Infinispan Externalizers, which, when invoked by Infinispan, then look
up our "own" Externalizers; that is, 2 Externalizer lookups per call. If
no annotations were required, we could instead wrap our clients'
Externalizers and plug them into Infinispan, which would mean a single
Externalizer lookup. By the way, since Externalizer now uses generics,
the annotation values look bad, especially since it's possible to
introduce errors that still compile.
Another issue, which I consider of minor importance due to the low-level
nature of the Externalizer concept, is the lack of inheritance of
Externalizers. For instance, we extend DefaultConsistentHash; instead of
extending its Externalizer, I had to write an Externalizer from scratch
and copy DefaultConsistentHash's Externalizer code. This is messy to
maintain: if the code in Infinispan changes, we will always have to
check whether the code we copied is still valid.
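For readers following along, the double lookup described above can be sketched like this (every name here is invented for illustration; this is not the Infinispan Externalizer API):

```java
import java.util.HashMap;
import java.util.Map;

// All names invented; "write" is simplified to producing a String.
interface FrameworkExternalizer<T> {
    String write(T obj);
}

// Client payloads get wrapped in a holder bound to one Infinispan-facing externalizer.
class Holder {
    final Object payload;
    Holder(Object payload) { this.payload = payload; }
}

class BridgeExternalizer {
    // lookup #2: payload type -> the framework's own externalizer
    private final Map<Class<?>, FrameworkExternalizer<Object>> registry = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> void register(Class<T> type, FrameworkExternalizer<T> ext) {
        registry.put(type, (FrameworkExternalizer<Object>) ext);
    }

    // Infinispan resolves BridgeExternalizer for Holder (lookup #1), which then
    // resolves the client externalizer for the payload (lookup #2).
    String write(Holder h) {
        return registry.get(h.payload.getClass()).write(h.payload);
    }
}
```

Without the annotation requirement, the client externalizer could be registered with Infinispan directly and lookup #2 would disappear.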
2) CacheContainer vs CacheManager
The API now deprecates CacheManager, replacing it with CacheContainer,
but some functionality is missing. For instance, CacheContainer is not a
Listenable, so there is no way to register listeners, short of unsafely
casting the instance exposed by the Cache or keeping a ref to the
original CacheManager. On a related matter, IMHO a Cache listener should
be able to listen to CacheManager events, becoming a global listener.
3) Configuration stuff
Generally I think Configuration and GlobalConfiguration could be
simplified a bit; I found myself several times looking at the impl code
to understand how to achieve some configurations. Better documentation
wouldn't hurt either. It's great to have a complete reference, but the
configuration samples are not OK: one is basically empty, and the other
has all possible stuff, which is very unrealistic. It would be better to
have reusable examples for each mode, with recommendations on how to
improve them.
Infinispan 5 introduces a new global configuration setter that takes an
instance, with the same method name as the one that takes the class
name. I believe one is enough, and to be more friendly with
Microcontainer and similar frameworks I would choose the one that sets
the instance.
4) AtomicMap doubt
I read on the Infinispan blog that AtomicMap provides colocation of all
entries; is that idea outdated? If not, we may need a way to turn it
off :) For instance, wouldn't that mean the Tree API does not work well
with distribution mode? I apologize in advance if I'm missing something,
but if AtomicMap enforces colocation, AtomicMap is good for a node's
data map, but not for a node's child FQNs. Shouldn't each child FQN be
freely distributed, colocated instead with the related node's cache
entry and data (atomic)map? Our impl is kind of a "hybrid" of the Tree
API; it allows cache entry references (similar to children) but no data
map, and the storage of references through AtomicMap in the same way as
the Tree API worries me. Please clarify.
5) Minor issues found
I see these a lot; forgotten INFO logging?
03:39:06,603 INFO [TransactionLoggerImpl] Starting transaction logging
03:39:06,623 INFO [TransactionLoggerImpl] Stopping transaction logging
MBean registration tries the same MBean twice; the second time fails and
prints a log message (no harm besides that, the process continues
without failures):
03:39:06,395 INFO [ComponentsJmxRegistration] Could not register
object with name:
org.infinispan:type=Cache,name="___defaultcache(dist_sync)",manager="DefaultCacheManager",component=Cache
6) Final thoughts
I feel kind of bad providing all this negative stuff in a single mail
(it took me an hour to write), but don't get me wrong: I really enjoy
Infinispan; it's a great improvement. I'm really excited to have it
plugged into AS7 (any plans for this?) and then migrate our platform to
this new cluster framework. I expect a big performance improvement on
something already pretty fast, and much less memory usage; our
Infinispan impl "feels" very fine-grained and optimized from every point
of view. Of course, the distribution mode is the cherry on top of the
cake: hello, true scalability.
I hope to find time to contribute back more, and in better ways, like
concrete enhancements or issues with test cases, as happened with JBoss
Cache, but right now this is the best I could do. By the way, I'm using
the nick mart1ns in the infinispan Freenode IRC channel; feel free to
ping me there.
Regards,
-- Eduardo
..............................................
http://emmartins.blogspot.com
http://redhat.com/solutions/telco
13 years, 11 months
TreeCache needs Flag(s) to be maintained for the duration of a batch/tx
by Galder Zamarreño
Hi,
Re: https://issues.jboss.org/browse/ISPN-841
The issue here is that if you call a TreeCache operation passing flags, you want these flags to apply to all cache operations encompassed by the tree cache op. Now, the thing to remember about flags is that they get cleared after each cache invocation, so we must somehow pass flags around to all methods that operate on the cache as a result of a treecache.put, for example.
A rudimentary way to do so would be to pass Flag... to all methods involved, which is not pretty and is hard to maintain. An alternative would be a thread-local set of flags that gets populated at the start of a tree cache operation and cleared at the end of the operation. Although this might work, isn't this very similar to what CacheDelegate does to maintain flags, except that instead of keeping them for a single cache invocation, it would keep them around until the end of the operation? TreeCache operations are bounded by start/stop atomic calls that are essentially calls to start/stop batches. So it seems to me that what this is asking for is wider functionality that keeps flags for the duration of a transaction/batch, which would most likely be solved better in core.
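The thread-local alternative could look roughly like this (a sketch only; the Flag constants are illustrative, and a real solution would have to integrate with batch boundaries and the flag handling in CacheDelegate):

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative flag values, standing in for Infinispan's Flag enum.
enum Flag { SKIP_LOCKING, SKIP_CACHE_STORE }

final class BatchFlags {
    private static final ThreadLocal<Set<Flag>> FLAGS =
        ThreadLocal.withInitial(() -> EnumSet.noneOf(Flag.class));

    // Called when the tree cache op (or batch/tx) starts.
    static void begin(Flag... flags) {
        Set<Flag> s = FLAGS.get();
        for (Flag f : flags) s.add(f);
    }

    // Nested cache operations consult this instead of per-invocation flags.
    static Set<Flag> current() { return EnumSet.copyOf(FLAGS.get()); }

    // Called when the op ends, so flags never leak past the batch.
    static void end() { FLAGS.remove(); }
}
```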
Thoughts?
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
13 years, 11 months
Fwd: [Jdbm-developer] JDBM2 stable release
by Elias Ross
Now that I have left my old company, I might be able to work on integrating this.
But is there still any interest in JDBM? Does it actually get used?
---------- Forwarded message ----------
From: Jan Kotek <opencoeli(a)gmail.com>
Date: Thu, Dec 23, 2010 at 2:18 PM
Subject: [Jdbm-developer] JDBM2 stable release
To: jdbm-developer(a)lists.sourceforge.net, jdbm-general(a)lists.sourceforge.net
Hi,
I am proud to announce the stable release of JDBM2. It is a fork of 1.0,
which integrates most of the patches developed here.
It is faster and more space-efficient than the older release.
home page:
http://code.google.com/p/jdbm2/
announcement:
http://www.kotek.net/blog/jdbm2_released
Regards,
Jan Kotek
13 years, 11 months
Marshaller implementation discovery
by David M. Lloyd
Sanne Grinovero asked me to drop a quick note about the API used to
discover JBoss Marshalling implementations. Since 1.2.0.GA, you can use
the org.jboss.marshalling.Marshalling class methods to locate protocol
implementations without involving a hard dependency in your sources.
I've heard that Infinispan uses this pattern to load the implementation
class:
(MarshallerFactory)
Thread.currentThread().getContextClassLoader().loadClass("org.jboss.marshalling.river.RiverMarshallerFactory").newInstance();
This is a bit kludgey though and is considerably more complex than just
doing:
Marshalling.getMarshallerFactory("river");
which uses the java.util.ServiceLoader API to locate and instantiate the
appropriate implementation class, also using the TCCL, and should be
functionally equivalent (yet quite a bit cleaner) to the former.
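For reference, the generic shape of such a ServiceLoader-based lookup is as follows (a sketch only; the real Marshalling.getMarshallerFactory additionally matches providers by protocol name):

```java
import java.util.ServiceLoader;

final class ServiceLookup {
    // Locate the first provider of `service` visible to the given class loader,
    // as registered via META-INF/services (or module declarations).
    static <S> S loadFirst(Class<S> service, ClassLoader cl) {
        for (S impl : ServiceLoader.load(service, cl)) {
            return impl;
        }
        throw new IllegalStateException("No provider for " + service.getName());
    }
}
```

The caller never names the implementation class, so there is no hard dependency on (for example) the River module at compile time.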
--
- DML
13 years, 11 months
Re: [infinispan-dev] Distributed tasks - specifying task input
by Manik Surtani
On 17 Dec 2010, at 11:41, Vladimir Blagojevic wrote:
> Even better this way. I would like to hear more about your reasoning behind using DT on a per-cache basis. Yes, it would be a simpler and easier API for the users, but we do not cover use cases where distributed task execution needs access to more than one cache during its execution....
I was just wondering whether such a use case exists or whether we're just inventing stuff. :-) It would lead to a much more cumbersome API since you'd need to provide a map of cache names to keys, etc.
>
> On 10-12-16 9:07 AM, Manik Surtani wrote:
>>
>>
>> Hmm. Maybe it is better to not involve an API on the CacheManager at all. Following JSR166y [1], we could do:
>>
>> DistributedForkJoinPool p = DistributedForkJoinPool.newPool(cache); // I still think it should be on a per-cache basis
>>
>> DistributedTask<MyResultType, K, V> dt = new DistributedTask<MyResultType, K, V>() {
>>
>> public void map(Map.Entry<K, V> entry, Map<K, V> context) {
>> // select the entries you are interested in. Transform if needed and store in context
>> }
>>
>> public MyResultType reduce(Map<Address, Map<K, V>> contexts) {
>> // aggregate from context and return value.
>> };
>>
>> };
>>
>> MyResultType result = p.invoke(dt, key1, key2, key3); // keys are optional.
>>
>> What I see happening is:
>>
>> * dt is broadcast to all nodes that hold either of {key1, key2, key3}. If keys are not provided, broadcast to all.
>> * dt.map() is called on each node, for each key specified (if it exists on the local node).
>> * Contexts are sent back to the calling node and are passed to dt.reduce()
>> * Result of dt.reduce() passed to the caller of p.invoke()
>>
>> What do you think?
>>
>>
>> [1] http://gee.cs.oswego.edu/dl/jsr166/dist/jsr166ydocs/index.html
>> --
>> Manik Surtani
>> manik(a)jboss.org
>> twitter.com/maniksurtani
>>
>> Lead, Infinispan
>> http://www.infinispan.org
>>
>>
>>
>
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Lead, Infinispan
http://www.infinispan.org
14 years