Hi Galder,
I checked out the latest version (from Subversion trunk) and used it
for my test, but I still have two issues:
1. Using configuration via code, I get the flush to the DB working but
still no aggregation. For 100 puts I get 100 stores.
Try 1000 puts on the same key and you should see fewer than 1000
stores. The thing to bear in mind about the implementation is that the
actual store is done by a number of threads ready to consume whatever is
given to them, so for aggregation to be seen happening in a unit test,
you need to keep them busy enough to be able to see keys being
aggregated into the latest value before they're consumed by one of
these threads. IOW, if these threads are not busy enough, it will look
as if no aggregation happens.
Remember that these threads don't act periodically but as soon as
there's something for them to consume. This gives us the guarantee that
async stores are done ASAP and hence the store is out of sync with
memory for a shorter time.
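
For example, here's a minimal sketch of the kind of test that makes this
visible (CountingStore and its count() method are hypothetical, just a
decorator that counts store() calls before delegating to the real cache
store):

   import org.infinispan.Cache;

   public class CoalescingObservationTest {
      // hypothetical decorator, only for the sketch
      interface CountingStore { int count(); }

      void observeCoalescing(Cache<String, String> cache,
                             CountingStore counter) throws Exception {
         for (int i = 0; i < 1000; i++)
            cache.put("same-key", "value-" + i); // hammer a single key
         Thread.sleep(2000); // let the async threads drain what's left
         // while the async threads were busy, later puts overwrote
         // earlier ones in the pending map, so the underlying store
         // sees fewer calls than there were puts
         assert counter.count() < 1000;
      }
   }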
2. Using the XML config, storeLockSafe() is never called at all?
Thanks for your help,
Amin Abbaspour
On Mon, Jul 20, 2009 at 7:09 PM, Galder Zamarreno <galder.zamarreno(a)redhat.com> wrote:
>
> On 07/20/2009 02:39 PM, Manik Surtani wrote:
>> On 20 Jul 2009, at 12:35, Amin Abbaspour wrote:
>>
>>> Hi,
>>>
>>> Thanks Galder. I will test CoalescedAsyncStore ASAP and let you know.
>>>
>>> BTW Manik, I think the change is so much that we'd better see this as
>>> a completely new store and later replace AsyncCacheStore or keep them
>>> both.
>> I agree that the impl is significantly different, but from a
>> functional/feature standpoint, what does it offer over and above the
>> AsyncCacheStore, except that it just does the same thing "better"? :-)
> I agree with Manik. I would have only left the other implementation if there
> was anything in particular the past implementation offered versus the
> coalesced one. My perf tests indicated that we're as fast as before but with
> the added bonus of fewer cache store calls, so that looks like a win/win
> situation to me.
>
> I've now committed https://jira.jboss.org/jira/browse/ISPN-116 and added an
> FAQ entry to http://www.jboss.org/community/wiki/InfinispanTechnicalFAQs
>
> I'll talk about this further in a blog entry later this week.
>
> FAO Amin, if you wanna try it out, check out the latest from trunk and let
> us know what you think. No need for configuration changes, simply add
> <async enabled="true" /> and that should be enough.
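>
> For reference, that element simply nests inside whatever cache store you
> already have configured, e.g. (FileCacheStore is only an example here):
>
>    <loaders>
>       <loader class="org.infinispan.loaders.file.FileCacheStore">
>          <async enabled="true" />
>       </loader>
>    </loaders>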
>
>>
>>> Regards,
>>> Amin Abbaspour
>>>
>>> On Mon, Jul 20, 2009 at 2:47 PM, Manik Surtani <manik(a)jboss.org> wrote:
>>>> Nope, never got the email below.
>>>>
>>>> Anyway, I always thought the intention was to enhance or improve the
>>>> AsyncCacheStore, not to create another one.
>>>>
>>>> Cheers
>>>> Manik
>>>>
>>>> On 20 Jul 2009, at 10:55, Galder Zamarreno wrote:
>>>>
>>>>> Did other list readers ever get this message?
>>>>>
>>>>> I did send it last week but never actually got it via the dev
>>>>> list, so
>>>>> double checking...
>>>>>
>>>>> On 07/15/2009 07:58 PM, Galder Zamarreno wrote:
>>>>>> Hi,
>>>>>>
>>>>>> As part of this JIRA, I'm thinking about whether we wanna leave the
>>>>>> current queue-based async store.
>>>>>>
>>>>>> For the work I've done, I've created a new class called
>>>>>> CoalescedAsyncStore that works in the way agreed below, IOW,
>>>>>> instead of queueing mods, it keeps a ConcurrentMap which is swapped
>>>>>> for a new instance when the async thread is going to apply the
>>>>>> changes.
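>>>>>>
>>>>>> In rough terms, the core of the idea is something like this (a
>>>>>> sketch with illustrative names, not the actual CoalescedAsyncStore
>>>>>> code; note this naive version still has the race discussed further
>>>>>> down, which the read/write lock closes):
>>>>>>
>>>>>>    // assumes java.util.concurrent imports
>>>>>>    private volatile ConcurrentMap<Object, Modification> state =
>>>>>>          new ConcurrentHashMap<Object, Modification>();
>>>>>>
>>>>>>    void enqueue(Object key, Modification mod) {
>>>>>>       // a later mod on the same key overwrites the earlier one,
>>>>>>       // which is where the coalescing comes from
>>>>>>       state.put(key, mod);
>>>>>>    }
>>>>>>
>>>>>>    void applyChanges() {
>>>>>>       ConcurrentMap<Object, Modification> toFlush = state;
>>>>>>       state = new ConcurrentHashMap<Object, Modification>(); // swap
>>>>>>       for (Modification m : toFlush.values())
>>>>>>          store(m); // write each coalesced mod to the real store
>>>>>>    }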
>>>>>>
>>>>>> I've run some perf tests between CoalescedAsyncStore and AsyncStore,
>>>>>> and when doing 1 million stores where each key is different, the
>>>>>> performance is fairly similar. The difference comes when running a
>>>>>> similar test where the same key is always updated: that results in
>>>>>> fewer underlying store calls, and hence the CoalescedAsyncStore
>>>>>> solution is faster here.
>>>>>>
>>>>>> So, rather than keeping both, I'd suggest replacing AsyncStore with
>>>>>> the impl in CoalescedAsyncStore. We also need to look at the
>>>>>> configuration for the new AsyncStore:
>>>>>>
>>>>>> Configuration-wise in AsyncStoreConfig, I'd leave threadPoolSize and
>>>>>> enabled, and add the read/write lock acquisition timeout. The rest
>>>>>> I'd get rid of, as they no longer apply:
>>>>>>
>>>>>> int batchSize = 100;
>>>>>> long pollWait = 100;
>>>>>> int queueSize = 10000;
>>>>>>
>>>>>> By default, I'd give 5000ms to the read/write lock acquisition
>>>>>> timeout.
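>>>>>>
>>>>>> IOW, roughly this shape (field names here are my assumption, not
>>>>>> committed code):
>>>>>>
>>>>>>    public class AsyncStoreConfig {
>>>>>>       boolean enabled = false;
>>>>>>       int threadPoolSize = 1;
>>>>>>       long flushLockTimeout = 5000; // read/write lock acquisition, ms
>>>>>>       // batchSize, pollWait and queueSize are gone: nothing is
>>>>>>       // queued or polled any more
>>>>>>    }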
>>>>>>
>>>>>> Thoughts?
>>>>>>
>>>>>> On 07/09/2009 01:08 PM, Manik Surtani wrote:
>>>>>>> On 9 Jul 2009, at 11:37, Galder Zamarreno wrote:
>>>>>>>
>>>>>>>> On 07/09/2009 11:55 AM, Manik Surtani wrote:
>>>>>>>>> On 8 Jul 2009, at 19:53, Jason T. Greene wrote:
>>>>>>>>>
>>>>>>>>>> Manik Surtani wrote:
>>>>>>>>>>> * Make the concurrent map volatile
>>>>>>>>>>> * When iterating, first create a new ConcurrentMap and replace
>>>>>>>>>>> the old one with the new one so all concurrent threads write to
>>>>>>>>>>> the new Map
>>>>>>>>>>> * Iterate over the old map
>>>>>>>>>> That would lead to race conditions since a concurrent writing
>>>>>>>>>> thread could write to the "old" map, either by getting a recent
>>>>>>>>>> yet incorrect read off the volatile, or reading it right before
>>>>>>>>>> it changes.
>>>>>>>>> True, since referencing the map and writing to it isn't atomic.
>>>>>>>>>
>>>>>>>>> We could guard access to the map with a read/write lock. Safe,
>>>>>>>>> if a little heavy-handed... map writers would acquire a read lock
>>>>>>>>> (since we want concurrent access here) but the async flushing
>>>>>>>>> thread would need to wait for a write lock to swap the map
>>>>>>>>> reference, releasing the lock after the map reference has been
>>>>>>>>> swapped.
>>>>>>>> Yeah, I don't think it could be just volatile because volatile
>>>>>>>> only applies to the variable itself, not the contents pointed at.
>>>>>>>> The R/W lock approach looks like a valid one, and better than
>>>>>>>> using synchronized blocks. Another thing I had in mind is whether
>>>>>>>> we need the copy to be a ConcurrentMap; we could probably just use
>>>>>>>> an Immutables.immutableMapCopy().
>>>>>>> Sync blocks would suck since a sync block is effectively an
>>>>>>> exclusive lock. We want non-exclusive (read) locks for threads
>>>>>>> writing to this map - we want multiple application threads to do
>>>>>>> this concurrently, otherwise this becomes a bottleneck. And hence
>>>>>>> the need for a ConcurrentMap.
>>>>>>>
>>>>>>> And we don't make a copy at all - all the async flushing thread
>>>>>>> does is create a new, empty ConcurrentMap and swap the 'async
>>>>>>> queue' Map so that application threads can continue logging their
>>>>>>> changes somewhere while old changes are iterated through and
>>>>>>> flushed. And for this, the field still needs to be volatile (not
>>>>>>> strictly so, since we can rely on the fencing that will happen in
>>>>>>> the lock as a side-effect).
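>>>>>>>
>>>>>>> Something like this, say (a sketch with assumed names, using
>>>>>>> java.util.concurrent.locks):
>>>>>>>
>>>>>>>    final ReadWriteLock mapLock = new ReentrantReadWriteLock();
>>>>>>>    volatile ConcurrentMap<Object, Modification> state =
>>>>>>>          new ConcurrentHashMap<Object, Modification>();
>>>>>>>
>>>>>>>    void log(Object key, Modification mod) {
>>>>>>>       mapLock.readLock().lock(); // shared: many writers at once
>>>>>>>       try {
>>>>>>>          state.put(key, mod);
>>>>>>>       } finally {
>>>>>>>          mapLock.readLock().unlock();
>>>>>>>       }
>>>>>>>    }
>>>>>>>
>>>>>>>    void flush() {
>>>>>>>       ConcurrentMap<Object, Modification> old;
>>>>>>>       mapLock.writeLock().lock(); // exclusive, only for the swap
>>>>>>>       try {
>>>>>>>          old = state;
>>>>>>>          state = new ConcurrentHashMap<Object, Modification>();
>>>>>>>       } finally {
>>>>>>>          mapLock.writeLock().unlock();
>>>>>>>       }
>>>>>>>       for (Modification m : old.values())
>>>>>>>          store(m); // iterate and flush outside the lock
>>>>>>>    }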
>>>>>>>
>>>>>>>> I looked at the two implementations (the ConcurrentHashMap(Map)
>>>>>>>> constructor vs Immutables.immutableMapCopy(), which would call
>>>>>>>> Object.clone() on the map) but I'm not sure which one would be
>>>>>>>> faster. Since clone() would rely on a native clone() impl, I'd
>>>>>>>> imagine that to be faster.
>>>>>>> Like I said, we don't need a copy. We need a new, empty map for
>>>>>>> threads to log changes to while the old one's being flushed.
>>>>>>>
>>>>>>>>> --
>>>>>>>>> Manik Surtani
>>>>>>>>> manik(a)jboss.org
>>>>>>>>> Lead, Infinispan
>>>>>>>>> Lead, JBoss Cache
>>>>>>>>> http://www.infinispan.org
>>>>>>>>> http://www.jbosscache.org
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> infinispan-dev mailing list
>>>>>>>>> infinispan-dev(a)lists.jboss.org
>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>>>>> --
>>>>>>>> Galder Zamarreño
>>>>>>>> Sr. Software Engineer
>>>>>>>> Infinispan, JBoss Cache
>>>>>>> --
>>>>>>> Manik Surtani
>>>>>>> manik(a)jboss.org
>>>>>>> Lead, Infinispan
>>>>>>> Lead, JBoss Cache
>>>>>>> http://www.infinispan.org
>>>>>>> http://www.jbosscache.org
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>> --
>>>>> Galder Zamarreño
>>>>> Sr. Software Engineer
>>>>> Infinispan, JBoss Cache
>>>>> _______________________________________________
>>>>> infinispan-dev mailing list
>>>>> infinispan-dev(a)lists.jboss.org
>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>> --
>>>> Manik Surtani
>>>> manik(a)jboss.org
>>>> Lead, Infinispan
>>>> Lead, JBoss Cache
>>>> http://www.infinispan.org
>>>> http://www.jbosscache.org
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev(a)lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev(a)lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> --
>> Manik Surtani
>> manik(a)jboss.org
>> Lead, Infinispan
>> Lead, JBoss Cache
>> http://www.infinispan.org
>> http://www.jbosscache.org
>>
>>
>>
>>
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev(a)lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> --
> Galder Zamarreño
> Sr. Software Engineer
> Infinispan, JBoss Cache
>
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache