Manik Surtani wrote:
On 21 Jul 2009, at 11:19, Mircea Markus wrote:
> Hi,
>
> I've extended the original DLD design to also support deadlock
> detection on local caches and updated design forum [1].
Does it also support async transports (1-phase commits)?
No, it is restricted to sync replication. A ConfigurationException is
thrown if you try to enable DLD with async replicated caches (though now
I realize it makes sense to enable it, as deadlocks can still be detected
locally; I'll have to think about that).
The original design did not account for async replication, but now that
you mention it I think it is doable by treating the competing
transactions on the remote node in the same way two local transactions
are treated; I don't see any reason why that would not work. And this
would also increase the throughput - thanks for raising this! :).
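For context, deadlock detection amounts to spotting a cycle in the wait-for relation between transactions. The sketch below is only the general idea, not Infinispan's actual implementation; the transaction names and the assumption that each tx waits on at most one lock owner are illustrative:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WaitForGraph {
    // waitsOn maps a transaction id to the id of the transaction
    // currently holding the lock it wants (at most one per tx here).
    static boolean hasCycle(Map<String, String> waitsOn, String start) {
        Set<String> seen = new HashSet<>();
        String cur = start;
        // Follow the wait-for chain; revisiting a node means a cycle,
        // i.e. a deadlock that will never resolve by waiting.
        while (cur != null && seen.add(cur)) {
            cur = waitsOn.get(cur);
        }
        return cur != null;
    }

    public static void main(String[] args) {
        Map<String, String> waitsOn = new HashMap<>();
        waitsOn.put("tx1", "tx2"); // tx1 wants a key locked by tx2
        waitsOn.put("tx2", "tx1"); // tx2 wants a key locked by tx1
        System.out.println("deadlock: " + hasCycle(waitsOn, "tx1"));
    }
}
```

Once such a cycle is found, one of the participants can be rolled back immediately instead of both sides blocking until the lock acquisition timeout expires.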
> This, together with the replicated deadlock detection is implemented
> in trunk (some minor stuff to do still: DLD for aggregation methods
> like clear and addAll + unit test).
>
> I've also created a benchmark to compare the throughput (tx/min) of
> caches running with and without DLD.
> You can find full test description within test class:
>
http://anonsvn.jboss.org/repos/infinispan/trunk/core/src/test/java//org/i...
>
> Local DLD does a good job (ca. 5.5 times better throughput) but
> replicated DLD does extraordinarily well: ca. 101 times better
> throughput (see attached).
This is very interesting, but perhaps a little artificial since your
key pool size is only 10.
Yes, it was designed specifically for high collision rates. Users can
configure the unit test with different parameters (in this case a higher
OBJECT_POOL_SIZE) to benchmark against their own data access patterns.
So you do force a lot of deadlocks as a result. And the time taken in
the non-DLD case would depend on the TM's transaction timeout
configuration, which again would vary.
Indeed, the Dummy TM does not force rollback based on tx timeout.
Another important factor is the lockAcquisitionTimeout.
Still, it's up to users to benchmark against their own specific scenarios.
So as a result I'd be careful about quoting performance increase
factors in a public blog (although you should definitely blog about
this as a feature and how it *could* speed up transactions that would
otherwise timeout).
Point taken. Are you also suggesting I leave out the diagrams? After
all, the numbers there are real, so with the context mentioned
(intentionally high collision, no tx timeout) they are relevant.
Also, it would be interesting to see how the cache fares with and
without DLD in a test where there are absolutely no deadlocks. E.g.,
each thread accesses the same keys, but in a way that would never
deadlock. I'd like to see if DLD adds much of an overhead.
Working
on that right now.
Overall though, very cool stuff! :)
Cheers
Manik
> I think DLD is cool stuff and differentiates us a bit from the
> competition; afaik none of them have DLD.
>
> One more thing that's worth mentioning: while running DLD tests I've
> noticed that if all txs acquire locks on keys in the same order, then
> no deadlocks exist. This is logical and might seem obvious, but it is
> not stated anywhere, and the performance increase from doing this can
> be very dramatic. I think we should document this as a best practice
> when it comes to transactions - any suggestion where?
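The ordered-acquisition practice described above can be sketched with plain JDK locks; the per-key lock map, key names, and helper names below are illustrative, not Infinispan API:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    // One lock per key; stands in for a cache's per-key lock table.
    static final Map<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    static ReentrantLock lockFor(String key) {
        return LOCKS.computeIfAbsent(key, k -> new ReentrantLock());
    }

    // Acquire locks in a single, globally consistent order (here: the
    // keys' natural ordering). Two transactions touching the same keys
    // can then never wait on each other in a cycle.
    static void lockInOrder(String... keys) {
        String[] sorted = keys.clone();
        Arrays.sort(sorted);
        for (String k : sorted) {
            lockFor(k).lock();
        }
    }

    static void unlockAll(String... keys) {
        for (String k : keys) {
            lockFor(k).unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            lockInOrder("a", "b");
            try { /* transactional work on a and b */ }
            finally { unlockAll("a", "b"); }
        });
        Thread t2 = new Thread(() -> {
            lockInOrder("b", "a"); // declared in the opposite order...
            try { /* ...but acquired in the same sorted order */ }
            finally { unlockAll("b", "a"); }
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("both transactions finished without deadlock");
    }
}
```

Without the sort, t1 (a then b) and t2 (b then a) could each grab their first lock and block forever on the second; with it, both threads contend on the same first lock and proceed serially.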
>
> I also intend to blog about it shortly.
>
> Cheers,
> Mircea
>
> [1]
>
http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4244838#...
>
>
>
> <DLD_local.JPG> <DLD_replicated.JPG>
>
> infinispan-dev mailing list
> infinispan-dev(a)lists.jboss.org
>
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org