[infinispan-dev] X-S replication configuration

Mircea Markus mircea.markus at jboss.com
Thu Jul 12 08:47:28 EDT 2012


On 10 Jul 2012, at 18:28, Galder Zamarreño wrote:
> Firstly, we should strive to be consistent with how our XML configuration works, so IMO, boo-moo should be booMoo (I'm talking about backup-strategy here).
+1
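For illustration (backup-strategy is just the draft name from the gist, so this is only a sketch), the camelCase convention would turn

    <backup-strategy .../>

into

    <backupStrategy .../>

consistent with existing elements like <namedCache> and <globalJmxStatistics>.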

> 
> Now, I'm confused by the global settings. What backupStrategy should define is how to communicate with the other site, i.e. if you want to use a different bind address, a different transport protocol, etc. This is the same as what we do with the global transport settings. We don't define whether backup (or, in clustering, replication or invalidation, etc.) is sync or async at this level. This configuration, sync/async, belongs at the cache level IMO.
gotcha 
Indeed, that's what we have the default cache for: all the other caches can inherit the defaults from there.
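To make that concrete - the element and attribute names below are only a sketch of the draft in the gist, not a settled schema - the inheritance could look like:

    <default>
        <backups>
            <backup site="NYC" strategy="SYNC"/>
        </backups>
    </default>

    <namedCache name="users">
        <!-- overrides the inherited default: back up to NYC asynchronously -->
        <backups>
            <backup site="NYC" strategy="ASYNC"/>
        </backups>
    </namedCache>

Sync/async stays a per-cache decision, and caches that don't override anything simply inherit the default.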
> 
> Does it make sense to have/define a site without the transport that you're going to use to communicate with it? You could potentially have N different networks to connect to N sites. If you assume the default, which default is it?
The actual transport is configured at the JGroups level in RELAY2, and the matching between Infinispan and JGroups is done based on the site name.
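Roughly, per the RELAY2 design (exact attributes may differ between JGroups versions), the local site name goes on the protocol in the stack:

    <relay.RELAY2 site="LON" config="relay2.xml"/>

and relay2.xml then lists all sites and the bridge stacks used to reach them:

    <RelayConfiguration xmlns="urn:jgroups:relay:1.0">
        <sites>
            <site name="LON" id="0">
                <bridges>
                    <bridge name="global" config="global.xml"/>
                </bridges>
            </site>
            <site name="NYC" id="1">
                <bridges>
                    <bridge name="global" config="global.xml"/>
                </bridges>
            </site>
        </sites>
    </RelayConfiguration>

So on the Infinispan side a backup only needs to say "NYC", and that name is resolved against the JGroups-level definition.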

> 
> Line 31, what about this config instead?
> 
> 	<syncBackup name="NYC"/>
> 
> You could still have <backup> for when you want to use default mode as per the cache. So, you'd have: backup, syncBackup, asyncBackup.
That feels a bit like mixing the attribute with the element, but that's precisely what we do with <async> and <sync> under the <clustering> tag.
I guess it's more a matter of taste, but I'd rather stick with a simple <backup> - any other opinions?
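Side by side, with a hypothetical strategy attribute as the strawman for the attribute-based variant:

    <!-- mode encoded in the element name -->
    <syncBackup name="NYC"/>

    <!-- mode as an attribute on a single element -->
    <backup name="NYC" strategy="SYNC"/>

The element-based form mirrors <sync>/<async> under <clustering>; the attribute-based form keeps a single element, which is what I'd lean towards.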

> 
> And line 25: <backups> instead of <sites>

Thanks for the excellent feedback! I've updated the document and added some more examples: https://gist.github.com/3059621