I am not sure I understand.
I assume that caches somehow have a unique identifier to recognize themselves in a cluster, right?
So you have a CacheManager created on each node, and by this "unique identifier" you can add a cache node to a given grid.
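
Roughly what I have in mind, as a sketch only (the cluster and cache names are made up, and the programmatic configuration classes below are from later Infinispan versions than the one discussed in this thread):

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class IndexGridNode {
    public static void main(String[] args) {
        // Each node creates its *own* CacheManager; the shared cluster name
        // (resolved over JGroups) is what makes the nodes join the same grid.
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport().clusterName("hsearch-index-grid"); // made-up name

        ConfigurationBuilder cfg = new ConfigurationBuilder();
        cfg.clustering().cacheMode(CacheMode.REPL_SYNC); // or DIST_SYNC

        EmbeddedCacheManager cm = new DefaultCacheManager(global.build(), cfg.build());

        // Every node asking for the same cache name ends up in the same clustered cache.
        Cache<Object, Object> indexCache = cm.getCache("lucene-index");
    }
}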

Infinitians, more info?

On  Jul 15, 2009, at 11:05, Łukasz Moreń wrote:

To have access to the same Infinispan cache on all nodes (master and slaves), I have to create it from the same, single CacheManager.
So the difficulty is how to distribute the CacheManager to all nodes - something like a singleton in a cluster.
Is there a recommended way to achieve that in our case?

Lukasz

2009/7/14 Emmanuel Bernard <emmanuel@hibernate.org>

On  Jul 13, 2009, at 23:59, Manik Surtani wrote:


On 13 Jul 2009, at 17:10, Łukasz Moreń wrote:

1. share the same grid cache between the master and the slaves

Infinispan has a flat structure. The key has to contain:
 - the index name
 - the chunk name
Both will essentially be the unique identifier.
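
For illustration only, a hypothetical key class (not the actual implementation) showing how index name + chunk name together form that unique identifier:

import java.io.Serializable;
import java.util.Objects;

// Hypothetical composite key for the flat grid: index name + chunk name
// together uniquely identify an entry.
public final class IndexChunkKey implements Serializable {
    private final String indexName;
    private final String chunkName;

    public IndexChunkKey(String indexName, String chunkName) {
        this.indexName = indexName;
        this.chunkName = chunkName;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof IndexChunkKey)) return false;
        IndexChunkKey k = (IndexChunkKey) o;
        return indexName.equals(k.indexName) && chunkName.equals(k.chunkName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(indexName, chunkName);
    }
}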

I suppose in this approach all indexes are stored in one single grid. What about one Infinispan grid per directory, similar to RAMDirectory or FSDirectory? IMHO it could bring some simplifications, e.g. in metadata or key names.
Are there any drawbacks in Infinispan to having a high number of caches in the network? Can sharing JGroups channels help with that?

They already share JGroups channels and other "heavy" components wherever possible. It's just that configuration becomes more of a pain, etc.
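
As a rough sketch of what "one cache per index" on a single shared transport looks like (hypothetical index names; the programmatic API shown is from later Infinispan versions):

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class OneCachePerIndex {
    public static void main(String[] args) {
        // One CacheManager == one JGroups channel; every named cache defined
        // on it shares that transport and the other global components.
        EmbeddedCacheManager cm = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build());

        // The configuration pain: each index needs its own cache definition,
        // even when they are all configured identically.
        String[] indexNames = { "Book", "Author", "Publisher" }; // made-up index names
        for (String index : indexNames) {
            cm.defineConfiguration(index,
                    new ConfigurationBuilder().clustering().cacheMode(CacheMode.DIST_SYNC).build());
            cm.getCache(index); // starts that cache on this node
        }
    }
}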

When you say one cache per index, how do you define an index? Does one index mean all indexed data for a single Java type? In which case, couldn't these scale up dynamically and potentially on demand? No wait - these are fixed in Hibernate Search at startup, correct?

Right, for now they are fixed at startup time.
I'm unclear what is easier really: one cache or multiple caches. Multiple configurations (if seen by the user) are a PITA; on the other hand they could provide some flexibility (i.e. one cache behaving differently than another), but that's very likely rarely needed.
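
If that flexibility were ever wanted, it would look roughly like the sketch below - made-up index names and settings, just to show one cache tuned differently from another:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class PerIndexTuning {
    public static void main(String[] args) {
        EmbeddedCacheManager cm = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build());

        // A small, read-mostly index could be fully replicated...
        cm.defineConfiguration("Author",
                new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build());

        // ...while a large index is distributed across a limited number of owners.
        cm.defineConfiguration("Book",
                new ConfigurationBuilder().clustering().cacheMode(CacheMode.DIST_SYNC)
                        .hash().numOwners(2).build());
    }
}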