Configuration visitor - Re: [JBoss JIRA] Commented: (ISPN-145) No transport and singleton store enabled should not be allowed
by Vladimir Blagojevic
Hi,
Galder and I talked about this offline. Time to involve you guys!
I just completed a visitor pattern for our configuration objects. The visitor
is passed from the root of the configuration - the InfinispanConfiguration object.
The InfinispanConfiguration class has a new method:
public void accept(ConfigurationBeanVisitor v)
How do we want to integrate this visitor into existing structure?
1) We add a new factory method to InfinispanConfiguration with an
additional ConfigurationBeanVisitor parameter
2) We leave everything as is and, if there is a need to pass some visitor,
we pass it to the InfinispanConfiguration instance directly (from
DefaultCacheManager)
DefaultCacheManager will pass a ValidationVisitor to
InfinispanConfiguration, which will verify the configuration semantically.
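To make this concrete, here is a minimal sketch of what such a validating visitor and its use from DefaultCacheManager could look like. The visitor callbacks, the config bean getters and the factory method below are assumptions for illustration and may not match the real API exactly (imports omitted):

public class ValidationVisitor implements ConfigurationBeanVisitor {

   private boolean hasTransport;

   public void visitGlobalConfiguration(GlobalConfiguration bean) {
      // remember whether a transport has been configured at the global level
      hasTransport = bean.getTransportClass() != null;
   }

   public void visitSingletonStoreConfig(SingletonStoreConfig bean) {
      // ISPN-145: a singleton store without a transport makes no sense
      if (bean.isSingletonStoreEnabled() && !hasTransport)
         throw new ConfigurationException(
               "Singleton store enabled but no transport configured");
   }
}

// Option 2: DefaultCacheManager passes the visitor to the parsed configuration
InfinispanConfiguration cfg = InfinispanConfiguration.newInfinispanConfiguration(configStream);
cfg.accept(new ValidationVisitor());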
Regards,
Vladimir
On 09-09-09 10:19 AM, Galder Zamarreno wrote:
> Good idea :)
>
> On 09/09/2009 04:13 PM, Vladimir Blagojevic wrote:
>> Yeah,
>>
>> I was thinking that we can make a visitor for the configuration tree and
>> then you can do verification of any node and other things as well. Use
>> cases will come up in the future for sure.
>>
>> Cheers
>>
>>
>>
>> On 09-09-09 3:29 AM, Galder Zamarreno (JIRA) wrote:
>>> [
>>> https://jira.jboss.org/jira/browse/ISPN-145?page=com.atlassian.jira.plugi...
>>>
>>> ]
>>>
>>> Galder Zamarreno commented on ISPN-145:
>>> ---------------------------------------
>>>
>>> Not sure I understand what you mean by generic though. You mean any
>>> component to have a validation step of some sort?
>>>
>>> Thanks for taking this on :)
>>>
>>>> No transport and singleton store enabled should not be allowed
>>>> --------------------------------------------------------------
>>>>
>>>> Key: ISPN-145
>>>> URL: https://jira.jboss.org/jira/browse/ISPN-145
>>>> Project: Infinispan
>>>> Issue Type: Bug
>>>> Components: Loaders and Stores
>>>> Affects Versions: 4.0.0.ALPHA6
>>>> Reporter: Galder Zamarreno
>>>> Assignee: Vladimir Blagojevic
>>>> Priority: Minor
>>>> Fix For: 4.0.0.CR1
>>>>
>>>>
>>>> Throw configuration exception if singleton store configured without
>>>> transport having been configured.
>>>> It makes no sense to have singleton store enabled when there's no
>>>> transport.
>>
>
Using Coverity scan?
by Sanne Grinovero
Hello,
Did you consider enabling Infinispan to be monitored by Coverity's
code analysis services? They are free for OSS projects; I saw a demo
recently and was quite amazed. It's similar to FindBugs, but not only
about static code checks. They check out your code from trunk and then
run several analyses on it periodically - one of them instruments the
code to analyse dynamic thread behavior and predict deadlocks or missing
fences - and they produce nice public reports. AFAIK you don't need to
set up anything yourself, besides getting in touch to ask for it.
It's only available for C and Java code, and they have an impressive
list of OSS projects in the C world (linux kernel, httpd server,
samba, gnome, GCC, PostgreSQL, ...) but not much on Java.
http://scan.coverity.com/
No, I'm not affiliated :-) Just thinking that it might be useful to
have if it's not too hard to set up.
Cheers,
Sanne
Wicket & Jetty Clustering
by Philippe Van Dyck
Hi all,
I have written a couple of infinispan clustering helpers for Jetty (a
session manager) and Wicket (a page store).
If you want them, where am I supposed to commit them? (As usual, it is crude,
undocumented code.)
Also, any idea about the schedule of the next alpha release?
Cheers,
phil
ISPN-359 and grouping entries for distribution
by Manik Surtani
Re: subject (see https://jira.jboss.org/jira/browse/ISPN-359), there are a couple of approaches that could be taken:
1. Don't use key.hashCode() as the seed in determining to which nodes an entry is mapped, but instead rely on a well-known method or an annotated method (e.g., int getGroupId() or a method annotated with @GroupId). The way I see it, this approach has:
+ Will work, no additional overheads of AtomicMaps
- Cost (reflection)
- Intrusive (what if users have no control over the key class, e.g., String keys?)
2. Additional API methods on the cache - cache.put(K, V, G), cache.putAll(Map, G), etc.
+ Non-intrusive
- Overhead of AtomicMaps + additional entries for mappings
+ or - (depending on how you look at it): all keys in the group will be locked together, etc., as a side-effect of using AtomicMaps
My preference is for approach #2. In terms of implementation, here is what I have in mind:
* A GroupingInterceptor that intercepts the call early on if the call is a put(K, V, G) or something similar.
* Breaks up the call into a put(K, G) and a getAtomicMap(G).put(K, V), wrapped in a tx to ensure atomicity.
* get(K), etc. are intercepted as well and replaced with getAtomicMap(get(K)).get(K)
* remove(K), etc. are intercepted and replaced with getAtomicMap(get(K)).remove(K) - see the sketch below
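To illustrate the indirection (a sketch only - GroupedOps stands in for what the interceptor would do behind the scenes; AtomicMapLookup is existing API, the rest is made up for the example):

import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMapLookup;

public class GroupedOps {

   private final Cache<Object, Object> cache;

   public GroupedOps(Cache<Object, Object> cache) {
      this.cache = cache;
   }

   // put(K, V, G): record the key -> group mapping, then store the value in
   // the group's AtomicMap (the two writes would run in a single tx)
   public void put(Object k, Object v, Object group) {
      cache.put(k, group);
      AtomicMapLookup.getAtomicMap(cache, group).put(k, v);
   }

   // get(K): resolve the group for the key first, then read from its map
   public Object get(Object k) {
      Object group = cache.get(k);
      return group == null ? null : AtomicMapLookup.getAtomicMap(cache, group).get(k);
   }

   // remove(K): drop the key -> group mapping and the grouped entry
   public Object remove(Object k) {
      Object group = cache.remove(k);
      return group == null ? null : AtomicMapLookup.getAtomicMap(cache, group).remove(k);
   }
}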
One of the issues with the API approach is that it heavily pollutes the Cache API. It will double the number of put() methods on Cache (currently 18 variants of put, including ones that take in lifespans and maxIdles, async versions that return futures, etc.). Perhaps this could go in an additional sub-interface? GroupedCache? Or is this degree of method overloading not too confusing?
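For instance, the grouped variants could live in a hypothetical sub-interface rather than on Cache itself; the name and signatures below are purely illustrative:

import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.infinispan.Cache;

public interface GroupedCache<K, V, G> extends Cache<K, V> {
   V put(K key, V value, G group);
   void putAll(Map<? extends K, ? extends V> map, G group);
   V put(K key, V value, G group, long lifespan, TimeUnit unit);
   // ... plus async variants, maxIdle overloads, etc.
}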
Cheers,
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
Server location hints in Infinispan
by Manik Surtani
This relates to https://jira.jboss.org/jira/browse/ISPN-180.
In JBoss Cache, we had a provision to allow for pluggable buddy selection algorithms. By default, the buddy selection process would first try to pick a buddy in the same buddy group, failing which any buddy *not* on the same physical machine, failing which any buddy not in the same JVM, and finally any buddy at all. Further, being pluggable, it allowed people to write their own buddy selection algorithms and pick buddies based on any additional metrics, such as machine performance, by hooking into monitoring tools, etc.
In Infinispan we do not have an equivalent as yet. The consistent hash approach to distribution takes a hash of each server's address and uses this to place the server on a consistent hash wheel. Owners for keys are picked based on consecutive places on the wheel. So there is every possibility that nodes on the same physical host or rack are selected to back each other up, which is not optimal for data durability.
One approach is for each node to provide additional hints as to where it is - hints including "machine id", "rack id" and maybe even "site id". The hash function that calculates an address's position on the hash wheel would take these 3 metrics into account, so this should be robust and pretty efficient. The only drawback with this approach is that, for each address, this additional data needs to be globally available, since consistent hashes need to work globally and deterministically. This information could be part of a DIST JOIN request, which would work well.
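As a rough illustration only (the field names and the mixing function below are assumptions, not a proposed implementation):

public class TopologyAwareAddress {

   final String nodeName;   // e.g. the underlying JGroups address
   final String machineId;  // physical machine hint
   final String rackId;
   final String siteId;

   public TopologyAwareAddress(String nodeName, String machineId, String rackId, String siteId) {
      this.nodeName = nodeName;
      this.machineId = machineId;
      this.rackId = rackId;
      this.siteId = siteId;
   }

   // Position on the hash wheel, derived from the node's identity plus its
   // machine/rack/site hints, bounded by the size of the hash space.
   public int positionOnWheel(int hashSpace) {
      int h = nodeName.hashCode();
      h = 31 * h + machineId.hashCode();
      h = 31 * h + rackId.hashCode();
      h = 31 * h + siteId.hashCode();
      return (h & Integer.MAX_VALUE) % hashSpace;
   }
}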
What do people think? Any interesting alternate approaches to this problem?
Cheers
Manik
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
Infinispan within JBossAS
by Dimitris Andreadis
Hi guys,
I'll be doing a short talk about JBoss 6 & Infinispan at a local conference:
"This presentation introduces Infinispan, the new underlying data caching and replication
infrastructure included with the upcoming JBoss AS 6. Infinispan is an Open Source library
that can be used independently of JBoss AS to let you build dynamic and highly available
data grids that scale to the order of thousands, while offering a large number of enterprise
features."
I need some help to identify the key benefits of using Infinispan in the context of AS.
- Why is it going to be better than JBoss Cache?
- What are the major use cases we are addressing?
The obvious ones are
- smaller memory footprint
- faster
But I need more, in terms of how it will affect
- session/sfsb replication
- jpa/entity caching invalidation, 2nd level caching
- other?
Beyond the "standard" usage of Infinispan replacing JBoss Cache in AS, I need use cases of
data grids used in the context of AS deployments. Is it going to be used primarily as a
read-mostly cache? Do you have some examples?
I'm trying to imagine applications that would really benefit from the JBossAS/Infinispan
combination when deployed in the cloud in really large numbers (e.g. hundreds of AS
instances). Would a special architecture design need to be used in this case?
Finally, do you have some standalone Infinispan usage examples?
I think it will be great if we can associate/match Infinispan (within AS) with real people's
problems and use cases.
Thanks for the help!
/Dimitris
Fwd: infinispan + mc + vfs
by Ales Justin
Pushing to dev ml ...
Begin forwarded message:
> From: Manik Surtani <msurtani(a)redhat.com>
> Date: April 28, 2010 4:44:11 PM GMT+02:00
> To: Ales Justin <ales.justin(a)gmail.com>
> Cc: Bela Ban <bban(a)redhat.com>
> Subject: Re: infinispan + mc + vfs
>
> All sounds very good. We should discuss this on infinispan-dev BTW...
>
> On 28 Apr 2010, at 15:14, Ales Justin wrote:
>
>>> It looks pretty good. Perhaps you should create a wiki page about this on Infinispan's wiki - I'm sure others will be interested.
>>
>> I still need to play around a bit.
>> Perhaps wiki really is the best way to push this fwd, along with mentioning this on the-core and/or our weekly AS confcall.
>>
>>> Also, what else would we need to create an infinispan-mc module? Essentially this would be an adapter that would allow Infinispan lifecycle and config parsing to hook in to the MC, so that we could build a JBoss AS "data grid" profile such that:
>>>
>>> ${JBOSS_HOME}/servers/datagrid/lib/<infinispan jars>
>>> ${JBOSS_HOME}/servers/datagrid/conf/infinispan.xml
>>>
>>> running:
>>>
>>> $ run.sh -c datagrid
>>>
>>> would start a standalone Infinispan node with:
>>>
>>> * MC
>>> * JNDI
>>> * JMX
>>> * Infinispan (based on the config in infinispan.xml)
>>
>> With my prototype you can already do all of this. ;-)
>>
>> All you're missing is infinispan.deployers dir/module in AS' deployers directory.
>> And then you would simply move conf/infinispan.xml into datagrid/deploy/ dir.
>> Voila! ;-)
>>
>> I can set this up once you/we get confirmation from the AS team that this is the right approach.
>>
>>> Some configs may also include HotRod/Memcached/WebSock/REST server - the latter of which would then need a web container. Same goes for servers/datagrid-managed, which would also contain a JOPR instance. Perhaps then what we are looking for is:
>>>
>>> servers/datagrid // default, p2p comms
>>> servers/datagrid-server // + HotRod/Memcached/WebSock endpoints
>>> servers/datagrid-REST // + REST endpoint
>>> servers/datagrid-managed // + JOPR instance
>>>
>>> Thoughts?
>>
>> This looks like too much fuss for a simple config diff.
>> We can ask around about how the new ProfileService (PS) can help us here,
>> to make this a single config + some PS magic.
>>
>> -Ales
>>
On 24 Apr 2010, at 21:48, Ales Justin wrote:
> I've hacked this initial MC + VFS support:
> * http://anonsvn.jboss.org/repos/jbossas/projects/demos/microcontainer/trun...
> * http://anonsvn.jboss.org/repos/jbossas/projects/demos/microcontainer/trun...
>
> You can then drop in this jar:
> * http://anonsvn.jboss.org/repos/jbossas/projects/demos/microcontainer/trun...
>
> Where we watch for "jboss-infinispan.xml" files:
> * http://anonsvn.jboss.org/repos/jbossas/projects/demos/microcontainer/trun...
>
> And to test all of this I deploy a plain MC bean:
> * http://anonsvn.jboss.org/repos/jbossas/projects/demos/microcontainer/trun...
> * http://anonsvn.jboss.org/repos/jbossas/projects/demos/microcontainer/trun...
>
> This also exposes how we can transparently use GridFilesystem over VFS.
>
> So, any feedback is welcome.
>
> -Ales
>
Infinispan on Linux can't get correct IP
by FangYuan
Hi guys:
I'm using JGroups with Infinispan on a Linux host. JGroups always
uses 127.0.0.1 as bind_addr when using DHCP. It seems like Infinispan
doesn't change some configuration, whereas running JGroups independently
doesn't have this problem. I have traced the startup of JGroups and found
something wrong in the getAddresses function of the Configurator class. This
function always gets 127.0.0.1 as the localhost address. I don't know the
reason.
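For reference, the bind address can normally be pinned explicitly before the transport starts; the IP, config file name and class below are just examples, and if I recall correctly JGroups honours the jgroups.bind_addr system property:

import org.infinispan.manager.DefaultCacheManager;

public class BindAddrExample {
   public static void main(String[] args) throws Exception {
      // Equivalent to starting the JVM with -Djgroups.bind_addr=192.168.1.10
      System.setProperty("jgroups.bind_addr", "192.168.1.10");
      DefaultCacheManager cm = new DefaultCacheManager("infinispan-clustered.xml");
      System.out.println("Cluster members: " + cm.getMembers());
   }
}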
ISPN-384 - Implementing hash distribution aware headers in Hot Rod
by galder@redhat.com
Hi,
Re: https://jira.jboss.org/jira/browse/ISPN-384
The topology headers are in, so the next step is to get the hash distribution headers in. The first step here is for Hot Rod servers to be able to query an Address' position in the wheel so that this is sent back to clients (this is what I refer to by hashcode in http://community.jboss.org/wiki/HotRodProtocol#HashDistributionAware_Clie... - it really is the hash wheel position). However, there's no API at the moment that allows this position to be queried. Having had a look, the most reasonable thing would be to add something like this to the ConsistentHash interface:
int getPosition(Address a)
However, this might be somewhat limiting if we end up implementing virtual nodes. ExperimentalDefaultConsistentHash hints at the possibility of that happening, so something like this might be more future proof:
List<Integer> getPositions(Address a)
This also highlights the limitation of the Hot Rod spec where it's assumed that a server has a single position.
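In code, the proposed addition would look something like this (a sketch - the existing ConsistentHash methods are elided and the method name is open for debate):

import java.util.List;
import org.infinispan.remoting.transport.Address;

public interface ConsistentHash {
   // Returning a list rather than a single int leaves room for virtual nodes,
   // where one Address owns several positions on the wheel.
   List<Integer> getPositions(Address a);
}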
Moreover, this brings up another interesting topic, which is the order in which Hot Rod orders the servers in the headers. For topology headers, although not written down, I'm following the same kind of pattern used at the JGroups level, where servers started first appear first in the list returned. I should probably add this to the protocol wiki.
In the case of hash distribution headers, I think it'd make sense for the order to be based on the hash wheel position, in ascending order. That way it would make life easier for clients to find the target node for an operation, since it'd save them having to do the sorting and work out the next node themselves. If we take this into account along with the fact that a node might map to multiple positions, I think the hash distribution header might look better this way:
[Response header][Topology Id][Num Key Owners][Hash Function Version][Hash space size][Num servers in topology]
-> New:
[*m1: Server Id*][m1: Host/IP length][m1: Host/IP address]
[*m2: Server Id*][m2: Host/IP length][m2: Host/IP address]...
[*m3: Server Id*][m3: Host/IP length][m3: Host/IP address]...
[Num total positions]
[*m2: Server Id*][m2: hash wheel position 1]
[*m3: Server Id*][m3: hash wheel position 2]
[*m2: Server Id*][m2: hash wheel position 2]
[*m3: Server Id*][m3: hash wheel position 1]
[*m3: Server Id*][m3: hash wheel position 3]
[*m1: Server Id*][m1: hash wheel position 1]
So, above I've split the server host/port definitions off on one side and the hash wheel positions on the other. I did this to avoid repeating the host/port definition with each hash wheel position. I've also added a count of total positions, and the list following it is ordered by ascending position. The server id would be a vInt.
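To show why the ascending order helps, here is what routing on the client could boil down to (an illustrative sketch, not part of the spec or of any client implementation):

import java.util.SortedMap;
import java.util.TreeMap;

public class HashWheelRouter {

   // hash wheel position -> server id, as decoded from the header
   private final TreeMap<Integer, Integer> wheel = new TreeMap<Integer, Integer>();

   public void addPosition(int position, int serverId) {
      wheel.put(position, serverId);
   }

   // The owner of a key is the first server at or after the key's hash,
   // wrapping around to the start of the wheel if necessary.
   public int serverFor(int keyHash) {
      SortedMap<Integer, Integer> tail = wheel.tailMap(keyHash);
      return tail.isEmpty() ? wheel.firstEntry().getValue() : tail.get(tail.firstKey());
   }
}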
WDYT?
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache