[infinispan-dev] Infinispan Large Scale support

Bela Ban bban at redhat.com
Fri Mar 18 03:12:26 EDT 2011



On 3/17/11 8:13 PM, david marion wrote:
>
> Hey all,
>
>    I am trying to get Infinispan 4.2.0.FINAL up and running on a large cluster (think 1000 nodes) for a project at work. I'm running into some issues and have been scouring the JIRA issues and forums. I have gotten almost zero responses on the forums.


> I think this has more to do with JGroups than Infinispan.

You should post this to the JGroups users mailing list then. I've not 
seen a cluster this large yet!


> Initially we were seeing lots of little clusters form; increasing the 
> number of initial members in the PING section of the jgroups-udp.xml 
> file and increasing the timeouts seem to have made things better, but 
> not great. Even at 20 nodes, we are seeing messages where the 
> coordinator is failing to flush.
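
For reference, that kind of tuning lives in the PING element of 
jgroups-udp.xml. A minimal sketch, using the JGroups 2.x attribute 
names; the values are purely illustrative, not a recommendation:

   <!-- illustrative values only -->
   <PING timeout="3000"
         num_initial_members="20"/>

timeout is how long discovery waits for responses, and 
num_initial_members is how many responses it waits for before giving 
up; both usually have to go up as the cluster grows.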

I do *not* recommend FLUSH in clusters bigger than 10-15 nodes! FLUSH 
will definitely kill you if you have clusters of 100s of nodes.

IIRC, Infinispan requires FLUSH, but I think that requirement was 
removed in 4.2.1.FINAL. Maybe Manik or Vladimir can comment?
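
If you do move to a version that no longer requires it, dropping FLUSH 
is just a matter of deleting the pbcast.FLUSH element at the bottom of 
the protocol stack in jgroups-udp.xml. A sketch; your stack will 
differ:

   <config>
       ...
       <pbcast.GMS join_timeout="3000" print_local_addr="true"/>
       <pbcast.STREAMING_STATE_TRANSFER/>
       <pbcast.FLUSH timeout="0"/>  <!-- delete this line -->
   </config>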


> Does anyone have any experience running Infinispan at a large scale 
> (100+ nodes) that may be able to shed some light on the items that 
> need to be changed in the configuration to run at this scale?


For large clusters, let's continue the discussion on the JGroups mailing 
list. Quite a few people would be interested (there's a parallel 
discussion on this subject going on right now)...


> It does not appear that the default configuration scales to a large size.


Correct. The default config is geared toward a 4-16 node cluster.

I'm collecting best practices in [1]. I don't know which version you're 
using, but there were some changes in recent releases to better 
accommodate large clusters, e.g. by reducing traffic. Plus, some changes 
to the configuration will help in running larger clusters.

[1] https://issues.jboss.org/browse/JGRP-100
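
To give a flavor of the knobs that tend to matter at this scale 
(discovery, failure detection, join handling, the transport's thread 
pools), here is a sketch with purely illustrative values; check the 
docs for your JGroups version before copying anything:

   <UDP max_bundle_size="64000"
        thread_pool.min_threads="2"
        thread_pool.max_threads="30" ... />
   <PING timeout="3000" num_initial_members="100"/>
   <!-- FD_ALL: heartbeats are multicast, which scales better than
        FD's logical ring -->
   <FD_ALL timeout="10000" interval="3000"/>
   <!-- view_bundling merges concurrent JOIN/LEAVE requests into
        fewer view installations -->
   <pbcast.GMS join_timeout="5000" view_bundling="true"/>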


-- 
Bela Ban
Lead JGroups / Clustering Team
JBoss

