Bela,
 
  Agreed, increasing the timeouts is probably not the way to go. However, I am at the mercy of whatever Infinispan is doing. As for the notion of Infinispan being tested at 1000 nodes:
 
  Manik was quoted here: http://itmanagement.earthweb.com/netsys/article.php/3864436/Red-Hat-Ramps-Up-Open-Source-Cloud-Projects.htm
  It was never denied here: http://community.jboss.org/thread/156494?tstart=0
  and I'll have to wait until I get home to dig up an email that I think I have.
 
However, I'm not assigning blame; I was under the impression that it had been tested at 1000 nodes. I just had it up at 430 nodes, but I can't reliably get it back to that size when I restart it. I think the answer is that Infinispan will have to use only what scales in JGroups (and remove what does not). Until then, I will have to scale it down to a size that I can start reliably.
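To illustrate what "remove what does not scale" would look like in practice, here is a minimal sketch of a JGroups UDP protocol stack with pbcast.FLUSH left out. This is an assumption-laden example, not the actual config in use: the multicast address/port and join_timeout values are placeholders, and the real stack shipped with Infinispan 4.2.0 contains more protocols and attributes than shown here.

```xml
<config xmlns="urn:org:jgroups">
    <!-- transport: placeholder multicast address/port -->
    <UDP mcast_addr="228.6.7.8" mcast_port="46655"/>
    <PING/>
    <MERGE2/>
    <FD_SOCK/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK/>
    <UNICAST/>
    <pbcast.STABLE/>
    <pbcast.GMS join_timeout="3000"/>
    <FRAG2/>
    <pbcast.STATE_TRANSFER/>
    <!-- pbcast.FLUSH deliberately omitted: per Bela's comment below,
         virtual synchrony was never meant to scale past ~20 nodes -->
</config>
```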
 
  Still no answer on whether ISPN-83 will be in 4.2.1...
 
 
 
Dave Marion
 
> Date: Fri, 18 Mar 2011 16:30:46 +0100
> From: bban@redhat.com
> To: infinispan-dev@lists.jboss.org
> Subject: Re: [infinispan-dev] Infinispan Large Scale support
>
>
>
> On 3/18/11 2:19 PM, david marion wrote:
> >
> > Bela, Manik,
> >
> > Thanks for responding. Will ISPN-83 be included in 4.2.1.FINAL? Yes, a large cluster jgroups config would be great. At this point we have taken the UDP config distributed with 4.2.0 and increased all the timeouts.
>
> That's certainly not the way to do it! If you post your config over on
> the JGroups mailing list [1], I'll take a look and suggest
> modifications. If we work on this a bit to get your cluster going, this
> config could serve as the basis of a large cluster sample config,
> shipped with JGroups and posted on a wiki.
>
>
> > It took about 30 minutes to get to 150 nodes. The cluster appears stable once it's up; the problem is in the startup. Based on what I am seeing and have read in the
> > documentation, every time a node wants to join, a FLUSH is sent across
> > the entire system and then a new view is created.
>
>
> FLUSH cannot be part of a large cluster configuration; virtual
> synchrony was never meant to scale to more than 20 or so nodes!
>
>
> > We are seeing nodes wait minutes to get the new view. It has been mentioned before that Infinispan was tested at 1000 nodes
>
>
> Where was this mentioned? I personally have never seen such a large
> cluster... The largest cluster I know of is ca. 400 nodes...
>
>
>
> [1] https://lists.sourceforge.net/mailman/listinfo/javagroups-users
>
> --
> Bela Ban
> Lead JGroups / Clustering Team
> JBoss
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev