> 2. Using xml config, no storeLockSafe is called at all?
That's because you have singleton store enabled but the cache is not
configured correctly to form a cluster. Either set the transport class
correctly, like this:
<transport clusterName="SmppGatewayCluster"
transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"/>
or disable singleton store:
<singletonStore enabled="false"
pushStateWhenCoordinator="true" pushStateTimeout="20000"/>
Note that enabling singleton store only makes sense when there's a
cluster. I'll add a JIRA so that, even if there isn't a cluster, having
singleton store enabled does not stop you from storing stuff to the
cache store. I'll check this ASAP.
> Thanks for your help,
> Amin Abbaspour
>
> On Mon, Jul 20, 2009 at 7:09 PM, Galder
> Zamarreno<galder.zamarreno(a)redhat.com> wrote:
>> On 07/20/2009 02:39 PM, Manik Surtani wrote:
>>> On 20 Jul 2009, at 12:35, Amin Abbaspour wrote:
>>>
>>>> Hi,
>>>>
>>>> Thanks Galder. I will test CoalescedAsyncStore ASAP and let you know.
>>>>
>>>> BTW Manik, I think the change is so big that we'd better see this as
>>>> a completely new store and later replace AsyncCacheStore, or keep them
>>>> both.
>>> I agree that the impl is significantly different, but from a
>>> functional/feature standpoint, what does it offer over and above the
>>> AsyncCacheStore, except that it just does the same thing "better"? :-)
>> I agree with Manik. I would only have left the other implementation if there
>> was anything in particular the past implementation offered versus the
>> coalesced one. My perf tests indicated that we're as fast as before but with
>> the added bonus of fewer cache store calls, so that looks to me like a
>> win/win situation.
>>
>> I've now committed https://jira.jboss.org/jira/browse/ISPN-116 and added an
>> FAQ entry to http://www.jboss.org/community/wiki/InfinispanTechnicalFAQs
>>
>> I'll talk about this further on a blog entry later this week.
>>
>> FAO Amin, if you wanna try it out, check out the latest from trunk and let us
>> know what you think. No need for configuration changes, simply add <async
>> enabled="true" /> and that should be enough.
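For reference, the async decorator wraps whichever cache store is configured, so the snippet would sit inside the loader element. A minimal sketch of such a section, assuming the 4.0-era schema and a file store as the decorated example (element and class names here are illustrative assumptions, not canonical):

```xml
<loaders>
   <!-- Any cache store can be decorated; the file store is just an example -->
   <loader class="org.infinispan.loaders.file.FileCacheStore">
      <!-- Enables the (coalescing) async store wrapper discussed in this thread -->
      <async enabled="true" />
   </loader>
</loaders>
```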
>>
>>>> Regards,
>>>> Amin Abbaspour
>>>>
>>>> On Mon, Jul 20, 2009 at 2:47 PM, Manik Surtani<manik(a)jboss.org> wrote:
>>>>> Nope, never got the email below.
>>>>>
>>>>> Anyway, I always thought the intention was to enhance or improve the
>>>>> AsyncCacheStore, not to create another one.
>>>>>
>>>>> Cheers
>>>>> Manik
>>>>>
>>>>> On 20 Jul 2009, at 10:55, Galder Zamarreno wrote:
>>>>>
>>>>>> Did other list readers ever get this message?
>>>>>>
>>>>>> I did send it last week but never actually got it via the dev
>>>>>> list, so
>>>>>> double checking...
>>>>>>
>>>>>> On 07/15/2009 07:58 PM, Galder Zamarreno wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> As part of this JIRA, I'm thinking whether we wanna leave the current
>>>>>>> queue based async store.
>>>>>>>
>>>>>>> For the work I've done, I've created a new class called
>>>>>>> CoalescedAsyncStore that works in the way agreed below, iow, instead
>>>>>>> of queueing mods, it keeps a ConcurrentMap which is swapped with a
>>>>>>> new instance when the async thread is going to apply the changes.
>>>>>>> I've run some perf tests between CoalescedAsyncStore and AsyncStore,
>>>>>>> and when doing 1 million stores where each key is different, the
>>>>>>> performance is fairly similar. The difference comes when running a
>>>>>>> similar test where the same key is always updated: this results in
>>>>>>> fewer underlying store calls, and hence the CoalescedAsyncStore
>>>>>>> solution is faster here.
>>>>>>>
>>>>>>> So, rather than keeping both, I'd suggest replacing AsyncStore with
>>>>>>> the impl in CoalescedAsyncStore. We also need to look at the
>>>>>>> configuration for the new AsyncStore:
>>>>>>>
>>>>>>> Configuration wise, in AsyncStoreConfig I'd leave threadPoolSize and
>>>>>>> enabled, and add the read/write lock acquisition timeout. The rest
>>>>>>> I'd get rid of as they no longer apply:
>>>>>>>
>>>>>>> int batchSize = 100;
>>>>>>> long pollWait = 100;
>>>>>>> int queueSize = 10000;
>>>>>>>
>>>>>>> By default, I'd give 5000ms to the read/write lock acquisition
>>>>>>> timeout.
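Under the proposal above, the trimmed-down config class might look roughly like this. This is a sketch of the proposal, not actual Infinispan source; the name flushLockTimeout for the new read/write lock timeout, and the thread pool default of 1, are assumptions.

```java
// Hypothetical sketch of the slimmed-down async store configuration
// proposed above. Only threadPoolSize and enabled survive from the old
// AsyncStoreConfig; batchSize, pollWait and queueSize are gone because
// with map coalescing there is no bounded queue to size or poll.
public class AsyncStoreConfig {
    private boolean enabled = false;
    private int threadPoolSize = 1; // assumed default
    // New: how long writers/the flusher wait on the read/write lock,
    // in milliseconds; 5000 is the default proposed in this thread.
    private long flushLockTimeout = 5000;

    public boolean isEnabled() { return enabled; }
    public void setEnabled(boolean enabled) { this.enabled = enabled; }
    public int getThreadPoolSize() { return threadPoolSize; }
    public void setThreadPoolSize(int n) { this.threadPoolSize = n; }
    public long getFlushLockTimeout() { return flushLockTimeout; }
    public void setFlushLockTimeout(long t) { this.flushLockTimeout = t; }
}
```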
>>>>>>>
>>>>>>> Thoughts?
>>>>>>>
>>>>>>> On 07/09/2009 01:08 PM, Manik Surtani wrote:
>>>>>>>> On 9 Jul 2009, at 11:37, Galder Zamarreno wrote:
>>>>>>>>
>>>>>>>>> On 07/09/2009 11:55 AM, Manik Surtani wrote:
>>>>>>>>>> On 8 Jul 2009, at 19:53, Jason T. Greene wrote:
>>>>>>>>>>
>>>>>>>>>>> Manik Surtani wrote:
>>>>>>>>>>>> * Make the concurrent map volatile
>>>>>>>>>>>> * When iterating, first create a new ConcurrentMap and replace the
>>>>>>>>>>>> old one with the new one so all concurrent threads write to the
>>>>>>>>>>>> new Map
>>>>>>>>>>>> * Iterate over the old map
>>>>>>>>>>> That would lead to race conditions since a concurrent writing
>>>>>>>>>>> thread could write to the "old" map, either by getting a recent yet
>>>>>>>>>>> incorrect read off the volatile, or reading it right before it
>>>>>>>>>>> changes.
>>>>>>>>>> True, since referencing the map and writing to it isn't atomic.
>>>>>>>>>>
>>>>>>>>>> We could guard access to the map with a read/write lock. Safe, if a
>>>>>>>>>> little heavy-handed... map writers would acquire a RL (since we want
>>>>>>>>>> concurrent access here) but the async flushing thread would need to
>>>>>>>>>> wait for a WL to swap the map reference, releasing the lock after
>>>>>>>>>> the map reference has been swapped.
>>>>>>>>> Yeah, I don't think it could be volatile because it only applies to
>>>>>>>>> the variable itself, not the contents pointed at. The R/W lock
>>>>>>>>> approach looks like a valid one, and better than using synchronized
>>>>>>>>> blocks. Another thing I had in mind is whether we need the copy to
>>>>>>>>> be a ConcurrentMap; we could probably just use an
>>>>>>>>> Immutables.immutableMapCopy().
>>>>>>>> Sync blocks would suck since it is effectively an exclusive lock. We
>>>>>>>> want non-exclusive (read) locks for threads writing to this map - we
>>>>>>>> want multiple application threads to do this concurrently, otherwise
>>>>>>>> this becomes a bottleneck. And hence the need for a ConcurrentMap.
>>>>>>>>
>>>>>>>> And we don't make a copy at all - all the async flushing thread does
>>>>>>>> is create a new, empty ConcurrentMap and swap the 'async queue' Map
>>>>>>>> so that application threads can continue logging their changes
>>>>>>>> somewhere while old changes can be iterated through and flushed. And
>>>>>>>> for this, the field still needs to be volatile (not strictly so,
>>>>>>>> since we can rely on the fencing that will happen in the lock as a
>>>>>>>> side-effect).
>>>>>>>>> I looked at the two implementations (the ConcurrentHashMap(Map)
>>>>>>>>> constructor vs Immutables.immutableMapCopy(), which would
>>>>>>>>> Object.clone() the map) but I'm not sure which one would be faster.
>>>>>>>>> Since clone() would rely on a native clone() impl, I'd imagine that
>>>>>>>>> to be faster.
>>>>>>>> Like I said, we don't need a copy. We need a new, empty map for
>>>>>>>> threads to log changes to while the old one's being flushed.
>>>>>>>>
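To make the swap-not-copy scheme concrete, here is a rough, self-contained Java sketch of the idea discussed above; the class and method names are illustrative, not the actual Infinispan implementation. Application threads log modifications into a ConcurrentMap under the read lock (non-exclusive, so they proceed concurrently, and repeated writes to the same key coalesce), while the flusher swaps in a fresh map under the write lock and then iterates the old one outside the lock.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the coalescing "swap" approach from this thread.
public class CoalescingQueue<K, V> {
    private final ReadWriteLock mapLock = new ReentrantReadWriteLock();
    // volatile so the swapped-in reference is visible to all threads;
    // writers also hold the read lock, so they never touch a map that
    // has already been handed to the flusher.
    private volatile ConcurrentMap<K, V> state = new ConcurrentHashMap<>();

    // Application threads: log a modification. The read lock is
    // non-exclusive, so many writers proceed concurrently; later writes
    // to the same key overwrite (coalesce with) earlier ones.
    public void put(K key, V value) {
        mapLock.readLock().lock();
        try {
            state.put(key, value);
        } finally {
            mapLock.readLock().unlock();
        }
    }

    // Async flushing thread: swap in a fresh, empty map under the write
    // lock, then iterate the returned old map outside the lock and push
    // its entries to the underlying store. No copy is ever made.
    public Map<K, V> swap() {
        mapLock.writeLock().lock();
        try {
            ConcurrentMap<K, V> old = state;
            state = new ConcurrentHashMap<>();
            return old;
        } finally {
            mapLock.writeLock().unlock();
        }
    }
}
```

Note the write lock is held only for the reference swap itself, so application threads are blocked only momentarily, matching the "releasing the lock after the map reference has been swapped" point above.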
>>>>>>>>>> --
>>>>>>>>>> Manik Surtani
>>>>>>>>>> manik(a)jboss.org
>>>>>>>>>> Lead, Infinispan
>>>>>>>>>> Lead, JBoss Cache
>>>>>>>>>> http://www.infinispan.org
>>>>>>>>>> http://www.jbosscache.org
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> infinispan-dev mailing list
>>>>>>>>>> infinispan-dev(a)lists.jboss.org
>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>>>>>> --
>>>>>>>>> Galder Zamarreño
>>>>>>>>> Sr. Software Engineer
>>>>>>>>> Infinispan, JBoss Cache
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache