Hey all,

I am trying to get Infinispan 4.2.0.FINAL up and running on a large cluster (think 1000 nodes) for a project at work. I'm running into some issues and have been scouring the JIRA issues and the forums, but have gotten almost zero responses on the forums. The Infinispan configuration is:

locking: isolationLevel=READ_COMMITTED, lockAcquisitionTimeout=50000, writeSkewCheck=false, concurrencyLevel=512, useLockStriping=false
transaction: syncRollbackPhase=false, syncCommitPhase=false, useEagerLocking=false
lazyDeserialization: enabled=false
invocationBatching: enabled=true
eviction: wakeUpInterval=1000, maxEntries=-1, strategy=FIFO
clustering: distribution, sync
  hash: numOwners=2, rehashRpcTimeout=600000
  l1: enabled=true, lifespan=600000
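
In XML form, that is roughly the snippet below (written out from memory against the 4.2 schema, so the element and attribute names may need double-checking; the cache name is just a placeholder):

<!-- sketch of the configuration described above; "myCache" is a placeholder name -->
<namedCache name="myCache">
  <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="50000"
           writeSkewCheck="false" concurrencyLevel="512" useLockStriping="false"/>
  <transaction syncRollbackPhase="false" syncCommitPhase="false" useEagerLocking="false"/>
  <lazyDeserialization enabled="false"/>
  <invocationBatching enabled="true"/>
  <eviction wakeUpInterval="1000" maxEntries="-1" strategy="FIFO"/>
  <clustering mode="distribution">
    <sync/>
    <hash numOwners="2" rehashRpcTimeout="600000"/>
    <l1 enabled="true" lifespan="600000"/>
  </clustering>
</namedCache>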

I think this has more to do with JGroups than with Infinispan itself. Initially we were seeing lots of little clusters form. Increasing the number of initial members and the timeouts in the PING section of the jgroups-udp.xml file seems to have made things better, but not great: even at 20 nodes we are seeing messages where the coordinator is failing to flush. Does anyone with experience running Infinispan at a large scale (100+ nodes) have some light to shed on what needs to change in the configuration to run at this size? The default configuration does not appear to scale that far. Any help would be appreciated, as my colleagues are starting to question my choice of cache implementation.
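
For reference, the PING tweak in jgroups-udp.xml looks roughly like this (the values below are illustrative placeholders, not the exact numbers we are running with):

<!-- timeout and num_initial_members raised from the defaults; values shown are examples only -->
<PING timeout="3000" num_initial_members="1000"/>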

Dave Marion