Manik Surtani wrote:
On 23 Jul 2009, at 16:51, Mircea Markus wrote:
> Manik Surtani wrote:
>>
>> On 21 Jul 2009, at 11:19, Mircea Markus wrote:
>>
>>> Hi,
>>>
>>> I've extended the original DLD design to also support deadlock
>>> detection on local caches and updated the design forum thread [1].
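>>>
>>> (For context, the rough idea of detecting a deadlock at lock-acquisition
>>> time, rather than waiting for the TM to time the transaction out: when a
>>> lock request would block, walk the wait-for chain and, if it leads back
>>> to the requester, roll one of the transactions back straight away. The
>>> sketch below is only an illustration of that idea, not the actual
>>> Infinispan code:
>>>
>>> import java.util.HashSet;
>>> import java.util.Map;
>>> import java.util.Set;
>>> import java.util.concurrent.ConcurrentHashMap;
>>>
>>> // Illustrative sketch only -- not the actual Infinispan implementation.
>>> final class WaitForGraphSketch {
>>>    // key -> tx currently holding the lock on that key
>>>    static final Map<String, String> lockOwner =
>>>          new ConcurrentHashMap<String, String>();
>>>    // tx -> key it is currently blocked on
>>>    static final Map<String, String> waitingFor =
>>>          new ConcurrentHashMap<String, String>();
>>>
>>>    // true if letting 'tx' block on 'key' would close a wait cycle
>>>    static boolean wouldDeadlock(String tx, String key) {
>>>       Set<String> seen = new HashSet<String>();
>>>       String owner = lockOwner.get(key);
>>>       while (owner != null && !owner.equals(tx) && seen.add(owner)) {
>>>          String blockedOn = waitingFor.get(owner); // what is the owner waiting for?
>>>          if (blockedOn == null) return false;      // owner isn't blocked -> no cycle
>>>          owner = lockOwner.get(blockedOn);         // follow the wait-for chain
>>>       }
>>>       return owner != null && owner.equals(tx);    // chain led back to 'tx'
>>>    }
>>> }
>>>
>>> One transaction in the cycle is rolled back immediately, so the other
>>> can make progress without waiting for a lock timeout.)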
>>
>> Does it also support async transports (1-phase commits)?
> I've just finished implementing 1PC.
>>
>>> This, together with the replicated deadlock detection, is implemented
>>> in trunk (some minor things still to do: DLD for aggregation methods
>>> like clear and addAll, plus unit tests).
>>>
>>> I've also created a benchmark to compare the throughput (tx/min) of
>>> caches running with and without DLD. You can find the full test
>>> description in the test class:
>>>
>>> http://anonsvn.jboss.org/repos/infinispan/trunk/core/src/test/java//org/i...
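>>>
>>> Roughly, the test does something like the sketch below (names are
>>> illustrative, not the actual test class): N threads each run
>>> transactions that write a couple of keys picked from a small shared
>>> pool, and we count how many transactions commit per minute, with and
>>> without DLD enabled.
>>>
>>> import java.util.Random;
>>> import java.util.concurrent.ConcurrentMap;
>>> import java.util.concurrent.CountDownLatch;
>>> import java.util.concurrent.atomic.AtomicLong;
>>> import javax.transaction.TransactionManager;
>>>
>>> // Illustrative only -- not the benchmark class from SVN. 'cache' and
>>> // 'tm' are assumed to be wired to the Infinispan cache elsewhere.
>>> final class ThroughputSketch {
>>>    static final int KEY_POOL = 10;       // small pool -> lots of contention
>>>    static final int THREADS = 8;
>>>    static final long DURATION_MS = 60 * 1000;
>>>
>>>    static long txPerMinute(final ConcurrentMap<String, String> cache,
>>>                            final TransactionManager tm) throws Exception {
>>>       final AtomicLong committed = new AtomicLong();
>>>       final CountDownLatch done = new CountDownLatch(THREADS);
>>>       for (int t = 0; t < THREADS; t++) {
>>>          new Thread() {
>>>             public void run() {
>>>                Random rnd = new Random();
>>>                long end = System.currentTimeMillis() + DURATION_MS;
>>>                while (System.currentTimeMillis() < end) {
>>>                   try {
>>>                      tm.begin();
>>>                      // two random keys, unordered -> deadlocks are possible
>>>                      cache.put("key" + rnd.nextInt(KEY_POOL), "v");
>>>                      cache.put("key" + rnd.nextInt(KEY_POOL), "v");
>>>                      tm.commit();
>>>                      committed.incrementAndGet();
>>>                   } catch (Exception deadlockOrTimeout) {
>>>                      try { tm.rollback(); } catch (Exception ignored) {}
>>>                   }
>>>                }
>>>                done.countDown();
>>>             }
>>>          }.start();
>>>       }
>>>       done.await();
>>>       return committed.get();   // committed transactions in one minute
>>>    }
>>> }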
>>>
>>> Local DLD does a good job (approx. 5.5 times better throughput), but
>>> replicated DLD does extraordinarily well: approx. 101 times better
>>> throughput (see attached).
>>
>> This is very interesting, but perhaps a little artificial since your
>> key pool size is only 10, so you force a lot of deadlocks. Also, the
>> time taken in the non-DLD case depends on the TM's transaction timeout
>> configuration, which will vary. So I'd be careful about quoting
>> performance-increase factors in a public blog (although you should
>> definitely blog about this as a feature and how it *could* speed up
>> transactions that would otherwise time out).
>>
>> Also, it would be interesting to see how the cache fares with and
>> without DLD, in a test where there are absolutely no deadlocks.
>> E.g., each thread's access patterns access the same keys, but in a
>> way that would never deadlock. I'd like to see if DLD adds much of
>> an overhead.
> I've also run such a test (the same test as before, but with the key
> set ordered, so no deadlocks occur). The average performance decrease
> is approx. 7% (see attached).
Any idea where the overhead is? Have you run this through a profiler?
There is additional computation and there are extra context switches for DLD.
>>
>> Overall though, very cool stuff! :)
>>
>> Cheers
>> Manik
>>
>>> I think DLD is cool stuff and differentiates us a bit from the
>>> competition; AFAIK none of them has DLD.
>>>
>>> One more thing worth mentioning: while running the DLD tests I've
>>> noticed that if all transactions acquire locks on keys in the same
>>> order, then no deadlocks occur. This is logical and might seem obvious,
>>> but it is not stated anywhere, and the performance gain from doing this
>>> can be dramatic. I think we should document it as a best practice for
>>> transactions - any suggestion where?
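>>>
>>> For the docs, the rule could be illustrated with something as simple as
>>> the sketch below (a hypothetical helper, not existing API): sort the
>>> keys before touching them, so that every transaction acquires the locks
>>> in the same order.
>>>
>>> import java.util.Map;
>>> import java.util.TreeMap;
>>>
>>> // Illustrative only: touch keys in a fixed (natural) order so that two
>>> // transactions working on the same keys can never wait on each other
>>> // in a cycle.
>>> final class OrderedUpdateSketch {
>>>    static void applyInOrder(Map<String, String> cache,
>>>                             Map<String, String> updates) {
>>>       // TreeMap iterates keys in natural order, so every transaction
>>>       // using this helper locks k1 before k2 whenever k1 < k2.
>>>       TreeMap<String, String> ordered = new TreeMap<String, String>(updates);
>>>       for (Map.Entry<String, String> e : ordered.entrySet()) {
>>>          cache.put(e.getKey(), e.getValue()); // put acquires the lock on that key
>>>       }
>>>    }
>>> }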
>>>
>>> I also intend to blog about it shortly.
>>>
>>> Cheers,
>>> Mircea
>>>
>>> [1]
>>>
>>> http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4244838#...
>>>
>>>
>>>
>>>
>>> <DLD_local.JPG> <DLD_replicated.JPG>
>>> _______________________________________________
>>>
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>>
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>> --
>> Manik Surtani
>> manik@jboss.org
>> Lead, Infinispan
>> Lead, JBoss Cache
>>
>> http://www.infinispan.org
>>
>> http://www.jbosscache.org
>>
>>
>>
>>
>
> <DLD_enabling_overhead.JPG>
--
Manik Surtani
manik@jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org