OK, then someone from the Infinispan team needs to help you.
If the option to use write-behind (instead of write-through) for a cache
loader still exists, that might be a perf boost.
The basic issue with the S3 cache loader is that it has to send bulky SOAP
messages to S3, and that is always slow. I don't know the current S3 cache
loader impl, but I suggest taking a look to see what properties it supports.
E.g. it might have an option to batch updates
and write them to S3 in collected form.
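If write-behind is still there, it should be the loader's <async> element; a rough sketch of what tuning it might look like (the threadPoolSize and flushLockTimeout attributes are my assumption, so check what the 4.0 schema actually accepts):

```xml
<!-- Hypothetical write-behind tuning for the S3 loader.
     Only enabled="true" appears in the config below; the other
     attributes are assumptions - verify against your schema. -->
<loader class="org.infinispan.loaders.s3.S3CacheStore">
  <async enabled="true" threadPoolSize="10" flushLockTimeout="5000" />
</loader>
```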
philippe van dyck wrote:
Thanks for your help, Bela.
Indeed, when I replace the S3 cache store with the disk one, the performance problem
disappears (takes less than a second to store my 100 'put' updates when I commit
the transaction).
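For reference, the disk-based loader section I swapped in was essentially the stock FileCacheStore; roughly this (the location value here is just a placeholder path):

```xml
<!-- Disk-based alternative that performed well.
     The location value is a placeholder, not my real path. -->
<loaders passivation="false" shared="true" preload="false">
  <loader class="org.infinispan.loaders.file.FileCacheStore"
          fetchPersistentState="false" ignoreModifications="false"
          purgeOnStartup="false">
    <properties>
      <property name="location" value="/tmp/infinispan-store" />
    </properties>
  </loader>
</loaders>
```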
Here is my config:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="urn:infinispan:config:4.0">
  <global>
    <transport
        transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
      <properties>
        <property name="configurationFile" value="jgroups.xml" />
      </properties>
    </transport>
  </global>
  <default>
    <transaction
        transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup" />
    <clustering mode="distribution">
      <l1 enabled="true" lifespan="100000" />
      <hash numOwners="2" rehashRpcTimeout="120000" />
    </clustering>
    <loaders passivation="false" shared="true" preload="false">
      <loader class="org.infinispan.loaders.s3.S3CacheStore"
              fetchPersistentState="false" ignoreModifications="false"
              purgeOnStartup="false">
        <properties>
          <property name="awsAccessKey" value="xxx" />
          <property name="awsSecretKey" value="xxx" />
          <property name="bucketPrefix" value="store" />
        </properties>
        <async enabled="true" />
      </loader>
    </loaders>
    <unsafe unreliableReturnValues="true" />
  </default>
</infinispan>
And the log:
INFO (14:54:37): JGroupsTransport - Starting JGroups Channel
INFO (14:54:38): JChannel - JGroups version: 2.8.0.CR5
-------------------------------------------------------------------
GMS: address=sakapuss.local-16157, cluster=Infinispan-Cluster, physical
address=192.168.1.136:7800
-------------------------------------------------------------------
INFO (14:54:49): JGroupsTransport - Received new cluster view:
[sakapuss.local-16157|0] [sakapuss.local-16157]
INFO (14:54:49): JGroupsTransport - Cache local address is
sakapuss.local-16157, physical address is 192.168.1.136:7800
INFO (14:54:49): GlobalComponentRegistry - Infinispan version: Infinispan
'Starobrno' 4.0.0.CR2
INFO (14:54:49): AsyncStore - Async cache loader starting
org.infinispan.loaders.decorators.AsyncStore@7254d7ac
WARN (14:54:51): FutureCommandConnectionPool -
org.jclouds.http.httpnio.pool.HttpNioFutureCommandConnectionPool@fcd4eca1 - saturated
connection pool
INFO (14:54:52): ComponentRegistry - Infinispan version: Infinispan
'Starobrno' 4.0.0.CR2
Please note the 'HttpNioFutureCommandConnectionPool@fcd4eca1 - saturated connection
pool' warning (??)
Philippe
On 2 Dec 2009, at 14:28, Bela Ban wrote:
> Just to narrow down the issue: when you disable the S3 cache store, I
> assume the performance problem goes away, correct?
>
> I'm just trying to pin the blame on the S3 cache loader, so I don't even
> need to check whether it is a JGroups problem... :-)
>
>
>
> philippe van dyck wrote:
>
>> Hi Infinispan mailing list,
>>
>> A couple of days ago, I succeeded in writing an entity store for qi4j
>> (http://www.qi4j.org/) using Infinispan, the S3 store and the S3_PING JGroups
>> clustering configuration.
>>
>> It works like a charm, discovers new EC2 instances, synchronizes and processes
>> transactions perfectly... you did an amazing job.
>>
>> But I have a serious performance problem.
>>
>> When I write an update (<1k) to the cache, it takes around 500 ms to be stored
>> on S3.
>>
>> The best result I achieved was around 10 cache writes per second... which is abysmal
>> (using httpclient directly I got at least 100/sec over 20 connections).
>>
>> When I commit a JTA transaction made of 100 cache writes, it takes around 30
>> seconds (CPU < 5%), and the first write reaches S3 only after at least 5 seconds of
>> 'idle' time (SSL negotiation??).
>>
>> I disabled the store's asynchronism and worked without JTA transactions; neither
>> had any effect on performance.
>>
>> I also modified the jclouds configuration, multiplying the worker threads,
>> connections and the rest by 10... no improvement!
>>
>> When I (load) test my web app (Wicket-based + qi4j + ... Infinispan), the CPU stays
>> idle (<5%), JTA transactions fail (timeouts), and I cannot acquire locks before
>> the 10-second timeout.
>>
>> Is there something fishy in the jclouds configuration? In jclouds' use of
>> httpnio? In the version of jclouds (the trunk one with the blob store seems to be
>> so different)?
>>
>> Am I missing something ?
>>
>> Any pointer to any doc/help/experience is welcome ;-)
>>
>> Philippe
>>
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev(a)lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>
> --
> Bela Ban
> Lead JGroups / Clustering Team
> JBoss
>